Does English law need a statutory reliability test for expert evidence in criminal proceedings? Reliability tests in other jurisdictions have not successfully dealt with the thorny issue of the reliability of expert evidence.
Dr KS Dhillon, FRCS, LLM
(1) Introduction
No criminal justice system can guarantee that miscarriages of justice will not occur, but it can put in place procedures that reduce the risk of wrongful convictions. The reality involves making difficult choices between competing values.
In the late 1980s and early 1990s a number of high-profile miscarriages of justice came to light, including the Birmingham Six, the Guildford Four, the Maguires, the Tottenham Three, the Cardiff Three and the Taylor Sisters.[1] Two public inquiries were established to improve the English justice system and restore public confidence.[2] In fact, on the same day as the convictions of the Birmingham Six were quashed, the Home Secretary, Kenneth Baker, announced the establishment of the Royal Commission on Criminal Justice (Runciman Commission, 1993). The Commission had far-reaching terms of reference, but since forensic evidence and expert testimony had played a prominent role in several miscarriages of justice, one of the terms of reference was to address the issue of forensic science and expert evidence. It recommended improving the forensic science services by exposing them to market principles and setting up a Forensic Science Advisory Council. It also recommended a ‘modified and improved adversarial procedure for preparation and presentation of scientific evidence’.[3]
The May Inquiry (1994) into the circumstances surrounding the miscarriages of justice in relation to the Guildford Four and the Maguire Seven raised considerable doubts about the forensic science evidence. The report ‘apportioned most of the responsibility to individuals, particularly the individual scientist’.[4] According to the report, these individual failings were not due to any weakness or fault of the criminal justice system, and no rules within the system could provide complete protection against such failings.
More recently, miscarriages of justice in Clarke (2003) and Cannings (2004), which sparked a public outcry, were also widely attributed to failings in expert witness testimony. The House of Commons Science and Technology Committee lamented that expert evidence in criminal proceedings was being admitted too readily and without sufficient scrutiny, and the committee called for reforms.[5] In response, the Law Commission first carried out a consultation in 2009[6] and subsequently published its report, ‘Expert evidence in criminal proceedings in England and Wales’, in 2011.[7] Attached to the report was a draft bill which recommended a statutory reliability test for the admission of expert evidence in criminal proceedings.
Reliability tests in other common law jurisdictions have not successfully dealt with the thorny issue of expert evidence reliability. There have been claims that reliability tests in many common law jurisdictions have failed to keep unreliable incriminating scientific evidence out of the courts.[8] The criminal justice system’s dilemma is its heavy reliance on forensic science evidence. Unfortunately, many kinds of forensic science evidence, with the exception of DNA evidence, lack the scientific foundation which is necessary for the formulation of useful reliability tests for the admissibility of expert testimony. Contrary to what many believe, even published and peer-reviewed studies have weaknesses which have been highlighted in recent years.
This article aims to analyse and highlight some of the current weaknesses in the state of the law on admissibility of scientific, medical and forensic evidence in criminal proceedings in the major common law jurisdictions. A review of the scientific and forensic evidence currently proffered in criminal trials reveals many inherent reliability issues which make it difficult for the courts to apply reliability-based admissibility requirements to such evidence. Implementing strict reliability requirements would deprive the courts of useful and relevant expert evidence from disciplines which have yet to develop methods of determining reliability. Furthermore, a strict reliability test may prevent the courts from keeping pace with scientific development. An analysis of some of the high-profile miscarriages of justice shows that tightening the rules of admissibility of scientific and forensic evidence is unlikely to resolve the perennial problems associated with unreliable expert evidence. These weaknesses need to be addressed by other reforms. Tightening the rules of admissibility of expert evidence may reduce the pragmatism, flexibility and judicial discretion which are essential for the law to develop in tandem with the development of the sciences.
(2) Admissibility Standards for Expert Evidence
The ‘relevancy test’ for expert testimony is consistently used in all common law jurisdictions. The ‘reliability test’ for admissibility of scientific evidence has been in existence in Australia since 1912, in the US since 1923 and in the UK since 1953, yet the rules of admissibility remain unsettled and are mired in controversy. The legal community is struggling with the issue of admissibility standards for expert testimony in criminal proceedings. The United States, with federal statutory reliability standards, has not been able to steer clear of inconsistencies and controversies.
(A) Standards in England and Wales
The principle of admissibility of scientific opinion in English law dates back to 1782.[9] The test for admissibility of expert evidence was defined in R v Silverlock [1894] 2 Q.B. 766. This case concerned the admissibility of handwriting expert testimony. Here the test of admissibility in relation to expert evidence was possession of the required skill (expertise), and the expertise need not be obtained by formal qualification. Whether the witness is an expert in the required field is for the tribunal to decide, and it is for the witness to prove his expertise.[10] Bingham L J in R v Robb[11] reaffirmed this test of admissibility.[12] Bingham L J went on to say that expert opinion can be given not only in established fields of science but also in other areas such as handwriting, fingerprinting and accident reconstruction, as well as in literary fields of work. However, the court will not admit the evidence of ‘an astrologer, a soothsayer or an amateur psychologist’.[13] The ‘relevancy test’ for admission of expert testimony was defined in R v Turner [1975] QB 834, where Lawton L J said,
[a]n expert's opinion is admissible to
furnish the court with scientific information which is likely to be outside the
experience and knowledge of a judge or jury. If on the proven facts a judge or
jury can form their own conclusions without help, then the opinion of an expert
is unnecessary.
The English courts have been quite liberal in admitting incriminating expert opinions without consistently requiring ‘reliability’ as a prerequisite for the admission of the evidence.[14] This has been seen in a number of cases in relation to facial mapping.[15] Issues such as unreliability, exaggeration or misrepresentation are treated as going to the probative value of the evidence and are left to the tribunal of fact to take into consideration when deciding on the ultimate issue. In fact, Gage L J in R v Harris said that ‘… developments in scientific thinking should not be kept from court, simply because they remain at the stage of hypothesis’.[16]
Though English law has no ‘gatekeeper’ test of reliability for the admissibility of scientific evidence, the courts have inconsistently applied various reliability tests by ‘drawing vague analogies with approaches taken in other jurisdictions and adopting statements of principles provided by foreign courts’.[17] In R v Gilfoyle[18], Rose L J rejected Professor Canter’s ‘psychological autopsy’ of an alleged murder victim, whom the defendant claimed had committed suicide, although he was ‘clearly an expert in his field’, because the evidence ‘tendered was not expert evidence of a kind to be properly placed before the court’.[19] The Court, in rejecting Professor Canter’s testimony, referred to Frye v United States[20], which requires expert testimony to be based on evidence ‘which must be sufficiently established to have gained general acceptance in the particular field in which it belongs’.[21]
The Court of Appeal in R v Luttrell[22] and R v Ciantar[23] did place emphasis on the need for reliability of expert evidence as a criterion of admissibility. Rose L J in Luttrell, while admitting expert evidence on lip reading, spoke of the authority which would make the expert witness’s opinion admissible. He quoted King C J in The Queen v Bonython[24], that the subject matter of the expert’s opinion should form ‘…part of a body of knowledge or experience which is sufficiently organised or recognised to be accepted as a reliable body of knowledge or experience…’.[25]
In Ciantar, Moses L J, in admitting evidence on facial mapping, reiterated that ‘the subject matter …. must be part of a body of knowledge or experience recognised to be a reliable body of knowledge’ and ‘[s]econdly, the witness must be qualified to express an opinion by reason of his special acquaintance with that body of knowledge’.[26] Bonython, though an Australian case, has been widely cited in English law with regard to the admissibility of expert evidence.[27] The Bonython approach, though widely cited in England and Wales, is not binding in English law, nor is it universally accepted, even in Australia. In R v Parenzee[28] the court explained that the principles enunciated in Bonython incorporated the Frye acceptance test. The Australian Law Reform Commission, in its review of the Uniform Evidence Acts, likewise affirmed that the law in South Australia incorporated the Frye general acceptance test.[29] In Gilfoyle the court did refer to the Frye test in rejecting the admission of expert testimony. However, the court in Dallagher[30] rejected the Frye test, stating that it has been superseded by the Federal Rules of Evidence.[31] There seems to be no uniformity in the English courts regarding the admissibility of expert evidence. The prevailing practice in the English courts regarding the reception of expert evidence appears to be one of pragmatism and flexibility, to allow the court to ‘enjoy the advantages to be gained from new techniques and new advances in science’.[32] The Court of Appeal in R v Reed and Reed[33] has indicated the current common law stance on the admissibility of expert evidence in the following words:
There is …no enhanced test of admissibility ….
If the reliability of the scientific basis for the evidence is challenged, the
court will consider whether there is a sufficiently reliable scientific basis
for that evidence to be admitted, but, if satisfied that there is a
sufficiently reliable scientific basis for the evidence to be admitted, then it
will leave the opposing views to be tested in the trial.[34]
In England and Wales there is no reliability test for the admissibility of expert testimony. The courts use the ‘relevancy test’ for the admissibility of such evidence, under which the courts decide whether the evidence is relevant, whether the witness is a qualified expert (the Silverlock test) and whether the evidence will assist the trier of fact (the Turner test). Once the evidence has passed the ‘relevancy test’ and has been admitted, then it is up to the jury to determine the reliability of the evidence through adversarial cross-examination. This can create problems when the opposing party does not have sufficient understanding to ask the relevant questions during cross-examination[35] or when the expert testimony is too complex and overwhelming for the jury, which may potentially lead the jury to place too much reliance on such evidence.[36]
What constitutes sufficiently reliable scientific evidence remains undefined and is left to the judge to decide. The courts have placed much trust in the adversarial system to weed out unreliable evidence, notwithstanding the fact that the NAS report has expressed doubts that such a system is ‘suited to the task of finding “scientific truth”’.[37]
(B) Standards in the USA
A common law reliability test for the admissibility of novel scientific evidence was first introduced in the US in 1923. In Frye v United States[38] the court ruled that a defendant in a murder trial could not use a precursor of the lie detector test as exculpatory evidence. The court defined the test for admissibility of scientific evidence:
Just when a scientific
principle or discovery crosses the line between experimental and demonstrable
stage is difficult to define. Somewhere in this twilight zone the evidentiary
force of the principle must be recognised, and while the courts will go a long
way to admitting expert testimony deduced from a well-recognised scientific
principle or discovery, the thing from which the deduction is made must be
sufficiently established to have gained general acceptance in the particular
field to which it belongs.[39]
Frye came to be widely known as the ‘general acceptance’ rule, which prohibited the admission of expert evidence unless the scientific evidence was generally accepted by the scientific community in the relevant scientific discipline. The courts were left to decide who the relevant scientific community was and also whether that community accepted the scientific principle.[40] Historically, according to Robert Kohar, in some jurisdictions such as Illinois where Frye is strictly followed, ‘it is only necessary that the expert make the statement or offer evidence that the methodology employed is accepted in his or her discipline’ for the court to admit such evidence in most cases.[41] The Frye rule was often applied in criminal cases and less often in civil matters and in testimony involving ‘soft or social’ sciences, because it was more difficult to apply in these areas.[42] The problem with the Frye acceptance rule was that novel scientific principles, although sound, were kept out of the courts, especially in criminal cases.[43]
In 1975, when the United States Federal Rules of Evidence were introduced, the issues relating to Frye were not addressed. Federal Rule 702 (1975), regarding the testimony of experts, stated that,
[i]f
scientific, technical, or other specialised knowledge will assist the trier of
fact to understand the evidence or to determine a fact in issue, a witness
qualified as an expert by knowledge, skill, experience, training or education
may testify thereto in a form of opinion or otherwise.[44]
After 1975 the majority of courts continued to use the Frye general acceptance rule, but soon a split in the federal and state courts began to appear, with some using the Frye test and others different versions of the Federal Rules of Evidence.[45] The lack of clarity and confusion over the relationship between the Frye test and the Federal Rules of Evidence prompted the Supreme Court in Daubert v Merrell Dow Pharmaceuticals Inc.[46] to declare that the Frye test was no longer a federal rule in the area of expert evidence. The court also ruled that Rule 702 applied to all scientific evidence and not only to novel scientific evidence.[47] Rule 702 is essentially a relevancy rule which allows the admission of ‘scientific, technical or other specialised knowledge’ if it is provided by an expert and it will assist the trier of fact. The court in Daubert, however, interpreted ‘scientific knowledge’ as a ‘standard evidentiary test’ based on ‘scientific validity’.[48] The court ruled that the evaluation of scientific evidence should be based on four considerations:
·        Whether the reasoning or methodology has been tested (falsifiability).
·        Whether the reasoning or methodology has been subjected to peer review and publication.
·        The potential rate of error of the methodology or reasoning.
·        Whether the methodology or reasoning has been generally accepted or respected in the relevant scientific community.[49]
Chief Justice Rehnquist and Justice Stevens concurred in part and dissented in part with the judgement in Daubert. Rehnquist CJ had doubts that judges would be able to evaluate evidentiary reliability based on scientific validity, as ‘it imposes on them either the obligation or the authority to become amateur scientists in order to perform the role’, and his opinion was to ‘leave further development of this important area of law to future cases’.[50] Rehnquist CJ’s pessimism has been borne out by subsequent research in the US, which shows that judges there have difficulty in applying the Daubert criteria effectively.[51]
In the US, some states have adopted the Daubert standard but others have rejected it and continue to retain Frye. In fact, the state of Wisconsin still uses the relevancy test for the admission of scientific or other expert evidence. It uses a three-part test to determine relevance: (1) the court determines whether the evidence is relevant, (2) whether the witness is qualified as an expert and (3) whether the evidence will assist the trier of fact.[52] Studies have also found that adoption of the Daubert test at state or federal level has had no statistically significant effect on the admission rate of scientific evidence.[53]
The Daubert test remains the current law in the United States federal courts. Two subsequent cases, General Electric Co. v Joiner[54] and Kumho Tire Co. v Carmichael[55], extended the gatekeeper role of the courts to include non-scientific expert evidence. In 2000 the Federal Rules of Evidence were amended to incorporate the Daubert criteria.[56] A case law survey of the standards governing the admissibility of scientific expert testimony in the US, carried out by Lustre in 2009, showed that 25 states were governed by Daubert standards, 15 states and the District of Columbia by Frye standards, 6 states by a combination of Frye and Daubert standards, and 4 states had developed their own standards.[57] The State of Wisconsin has rejected all reliability tests for the admissibility of scientific evidence and continues to use the relevancy test.
The majority of criminal cases in the US are heard in state courts, and there is as yet no uniformity in the standards governing the admissibility of scientific evidence in criminal cases in these courts. This may be due to a need to admit evidence, to answer questions of fact, where the evidence does not meet the requirements of a standard test governing the admissibility of expert testimony. Even federal courts that apply Daubert standards regularly admit evidence with unproven methodologies.[58]
(C) Standards in Canada
Commonwealth countries generally apply the relevancy test for the admission of expert evidence; however, the admission criteria vary from case to case. Canadian jurisprudence on the admission of expert evidence has generally shifted towards adopting a reliability test, with some of it drawn from US jurisprudence. In modern Canadian jurisprudence the most important decision regarding the admissibility of expert evidence was made in R v Mohan.[59] This was an appeal from the Ontario Court of Appeal, where a paediatrician was charged with sexual assault of his patients and an expert witness testified that the defendant did not fit the psychological profile of a perpetrator of such a crime. The question for the Supreme Court was whether the expert evidence, which the Court of Appeal had allowed, was admissible. The Supreme Court excluded the evidence and restored the conviction.
Sopinka J, giving judgement in the case, outlined four criteria for the admission of expert opinion evidence:
(1) Relevance
(2) Necessity in assisting the trier of fact
(3) Absence of exclusionary rule
(4) A properly qualified expert[60]
Relevance – a question of law to be decided by the judge, governed by the logical relevance of the evidence; it has to be considered with other factors in mind, such as the time and resources consumed and the probative value of the evidence as well as its prejudicial effect.[61]
Necessity – the information must be necessary to ‘enable the trier of fact to appreciate the matters in issue due to their technical nature’. This precondition standard of necessity should not be set too high, nor too low (mere helpfulness). Too low a standard would result in the trial becoming ‘nothing more than a contest of experts with the trier of fact acting as a referee…’.[62]
The third factor – absence of exclusionary rule – should take into consideration the rules of evidence (exclusionary rules) governing the trial.[63] The fourth factor – a properly qualified expert – requires that the evidence be given ‘by a witness who is shown to have acquired special or peculiar knowledge through the study or experience in respect of matters on which he or she undertakes to testify’.[64]
Sopinka
J summarised the four criteria in the following words:
it appears from the foregoing that expert evidence which advances a
novel scientific theory or technique is subjected to special scrutiny to
determine whether it meets a basic threshold of reliability and whether it is
essential in the sense that the trier of fact will be unable to come to a
satisfactory conclusion without the assistance of the expert. The closer the evidence approaches an opinion
on an ultimate issue, the stricter the application of this principle.
The above four criteria mirror the English admissibility criteria, with the additional caveat that expert evidence involving a ‘novel scientific theory or technique’ be subjected to special scrutiny before it is deemed reliable enough to be admissible.
The court in R v J-L.J[65] subsequently consolidated the concept of reliability as a threshold for the admission of expert evidence. Binnie J, in delivering the judgement, said that ‘Mohan kept the door open to novel science, rejecting the “general acceptance” test formulated in “Frye” and moving in parallel with its replacement, the “reliable foundation” test laid down in Daubert’[66], although Mohan had not referred to Daubert. Mohan and J-L.J reiterated the need to protect the judge and jury against ‘junk science’.[67] Both Mohan and J-L.J placed emphasis on the need to satisfy the trial judge that the evidence offered had ‘underlying principles’ and a ‘methodology’ which were ‘reliable and more importantly applicable’.[68] The judgement also drew attention to the influential case of Davie v Magistrates of Edinburgh (1953)[69], which called for experts to furnish the ‘necessary scientific criteria for testing the accuracy of their conclusions’.[70]
In R v Trochym[71], the J-L.J approach to the admissibility of expert evidence was affirmed by a divided court. Justice Deschamps, delivering the decision for the majority, referred to a number of high-profile wrongful convictions and reiterated the ‘need to carefully scrutinise evidence presented against the accused for reliability and prejudicial effect to ensure basic fairness of the criminal process’.[72]
Deschamps
J stressed that,
[r]eliability is an
essential component of admissibility. Whereas the degree of reliability
required by the courts may vary depending on the circumstances, evidence that
is not sufficiently reliable is likely to undermine the fundamental fairness of
the criminal process.[73]
The court revisited the Daubert criteria of reliability that had been taken into consideration in J-L.J.[74]
Canadian jurisprudence has in recent years moved towards the requirement of reliability as a standard for the admissibility of expert evidence, but the courts have not ‘explained how indicia of reliability… should be weighted or applied’.[75] Gary Edmond sums up the current state of the law on the admissibility of scientific evidence in Canada by saying that ‘as things stand a vague concept of reliability erratically impacts upon Canadian expert evidence jurisprudence and legal practice’.[76] The Canadian Supreme Court tends to decide on the admissibility of scientific evidence on a case-by-case basis,[77] and lacks uniform standards for the admissibility of expert testimony.
(D) Standards in Australia
In Australia too, the rules for the admissibility of expert evidence remain unsettled. The law here has not resolved the test of ‘field of expertise’.[78] Some courts have adopted the Frye test, others the Daubert reliability test or a combination of both.[79] Incidentally, a general acceptance test existed in Australian law long before Frye in the US. In R v Parker[80] the court held that fingerprint evidence could be admitted if the theory supporting the evidence was ‘generally recognised by scientific men’.
Following a high-profile miscarriage of justice in R v Chamberlain[81], the Australian Law Reform Commission (ALRC) was tasked, in 1985, with reviewing the existing standards of admissibility of expert evidence. The ALRC in its interim report rejected the Frye test as well as the need for a reliability test, stating that reliability is not an admission criterion but goes to probative value.[82] In 1995 the Australian Parliament passed the Evidence Act 1995. Federal courts and courts in the Australian Capital Territory apply the law in this Act, while New South Wales, Tasmania and Norfolk Island have passed mirror legislation. Section 79 of the Evidence Act[83] states that ‘[i]f a person has specialised knowledge based on the person’s training, study or experience, the opinion rule does not apply to evidence of an opinion of that person that is wholly or substantially based on that knowledge’.
The rule is quite similar to the US Federal Rule of Evidence 702. Section 79 refers to ‘specialised knowledge’ while Rule 702 refers to ‘scientific, technical or other specialised knowledge’. Both Rule 702 and s79 are vague, and this gives the courts some flexibility and opportunity to interpret the rules as the situation demands. The court in Daubert, however, interpreted Rule 702 and established a set of criteria for admissibility, and Rule 702 was subsequently amended in 2000 to accommodate the reliability criteria of Daubert.[84] However, the interpretation of s79 has given rise to a great deal of controversy; the meaning of ‘specialised knowledge’ remains undefined.
At common law in Australia, Clark v Ryan[85] and Bonython v R[86] have confirmed that the ‘field of expertise’ requirement is based on the requirement that the opinion should be derived from a ‘body of knowledge which is both organised and accepted’.[87] This is the threshold requirement of evidentiary reliability in most states in Australia.[88] It is similar to the general acceptance requirement in Frye. Einstein J in Idoport Pty Ltd v National Australia Bank Ltd[89] was of the opinion that s79 represents a rejection of the Frye test. In R v Tang[90], Spigelman CJ held that the meaning of ‘knowledge’ in s79 is the same as in Daubert. Spigelman CJ in Tang also held that evidentiary reliability is not a consideration under s79.[91] The interpretation of s79 remains unsettled. The statutory rule for the admissibility of expert evidence in Australia has left the legal community more confused than ever before. No wonder Gary Edmond has been critical of the expert evidence admissibility law in Australia. He says that in New South Wales, and Australia in general, ‘judges have not exhibited much interest in reliability of expert opinion evidence’,[92] which has allowed incriminating expert testimony of doubtful reliability to be admitted despite the existence of statutory rules governing admissibility.
We need to be cognisant of the fact that Australian law had a test of general acceptance eleven years before Frye came into existence. In R v Parker, Justice Malden held that fingerprint evidence could be admitted if the individuality of fingerprints ‘were generally recognised by scientific men’ or was ‘sufficiently studied to enable these propositions to be laid down on scientific fact’.[93]
The law in the UK too has had a reliability test since 1953. The court in Davie v Magistrates of Edinburgh[94] held that the duty of an expert witness is ‘to furnish the judge or jury with the necessary scientific criteria for testing the accuracy of their conclusions, so as to enable the judge or jury to form their own independent judgment by the application of these criteria to the facts proved in evidence’.[95] Despite the fact that the law in Australia has had a reliability test since 1912 and the US since 1923, the rules of admissibility of expert scientific evidence remain unsettled in the common law jurisdictions. In the US, the state courts still have no single uniform test for the admissibility of expert evidence, and in the federal courts judges have difficulty in applying the Daubert criteria, as we have seen earlier. The only rule that is settled is the ‘relevancy test’ for the admissibility of expert evidence.
The reason for this unsettled state of affairs
with the law appears to be borne out of a need to admit expert evidence to
answer questions of fact, despite reliability issues with certain types of
scientific expert testimony. These reliability issues prevent the formulation of
standard reliability tests. Flexible admission standards appear to be the way
forward until these reliability issues can be ironed out. What are these
reliability issues with scientific expert testimony?
(3) Reliability of Scientific Evidence
The Law Commission report on expert evidence in England and Wales (Law Com. No 325)[96] has defined a reliable expert opinion as one that is ‘soundly based’, and opinion which is not sufficiently reliable as:
·        Opinion based on a hypothesis which has not been subjected to sufficient scrutiny (including, where appropriate, experimental or other testing), or which has failed to stand up to scrutiny.
·        Opinion based on flawed data.
·        Opinion which relies on an examination, technique, method or process which was not properly carried out or applied….
·        Opinion which relies on an inference or conclusion which has not been properly reached.[97]
The schedule to the draft bill provides a list of ‘generic factors’[98] for scientific, technical and other types of expertise, some of which are:
·        The validity of the methods by which the data were obtained.
·        Whether the ‘accuracy or reliability of the results’ has taken into consideration the ‘degree of precision or margins of uncertainty’.
·        Whether the ‘material upon which the expert’s opinion is based has been reviewed by others with relevant expertise (for instance, in peer review publications), and the views of those others on the material’.[99]
The definition of reliability in the Law Commission report is not as precise as the criteria laid down in Daubert. The reliability criteria laid down in Daubert offer trial judges five criteria to consider for the admissibility of expert evidence:
·        Whether the theory or technique ‘can be (and has been) tested’ (falsifiability).
·        ‘Whether the theory or technique has been subjected to peer review and publication’.
·        Whether the ‘known or potential error rate’ is acceptable.
·        Whether standards exist and are maintained to control the technique’s operation.
·        Whether the technique or theory is generally accepted.[100]
The question remains whether all medical and forensic expert evidence currently proffered in criminal trials can stand up to such rigorous scrutiny. The National Academy of Sciences (NAS) report, which was produced by a highly qualified and respected committee in the USA, noted that ‘[t]he forensic science system, encompassing both research and practice, has serious problems …’.[101] The report acknowledged that advances in forensic disciplines (especially DNA) have helped solve many crimes, but at the same time wrongful convictions of innocent people may have taken place due to faulty forensic analysis. What are these serious problems?
Some areas of medical and forensic expert evidence which have generated much academic debate are reviewed below, to see whether such evidence meets the reliability requirements stipulated by the Law Commission and to assess the strengths and weaknesses of the evidence.
(A) DNA – ‘God’s Signature’?
The most common biological evidence obtained in criminal investigations consists of samples of blood, semen and saliva, besides many others which contain nuclear DNA. DNA typing is now universally accepted as a gold standard because of its reliability.[102] DNA profiling was first developed in the 1980s and by the 1990s had become established enough to be called a gold standard in forensic science.[103]
(1) DNA Profiling
The DNA revolution in forensic science has come a long way since DNA fingerprinting was discovered by Alec Jeffreys and his team in 1984. They discovered hypervariable loci termed ‘minisatellites’ and developed a technique called the ‘multilocus probe’ (MLP), which used enzymes to cut the DNA into fragments; with the help of markers a pattern of bands was created which could be matched.[104] In the mid-1990s the Short Tandem Repeat (STR) technique was introduced, which uses sequences of shorter length. This was made possible by the polymerase chain reaction (PCR), which permitted smaller amounts of genetic material to be amplified.[105] Initially STR profiling was done with a single locus probe, but subsequently two highly variable complex STRs were added to reduce random match probabilities to about 1 in 50 million. These were termed the ‘second generation multiplex’ (SGM). In 2000 four more loci were added to this multiplex and it came to be known as SGM plus (SGM+), which further reduced the match probability to less than 10⁻¹³.[106]
According to Bentley and Lownds[107], there are two types of commercial kits commonly used in the UK: the SGM+ and the Identifiler. The SGM+ targets 10 loci and the Identifiler 15 loci, and both kits target an additional locus (amelogenin) which identifies the sex of the contributor. The result of the analysis is represented by a graph called an electropherogram (EPG), which is then interpreted by the scientist. Interpretation is usually not a problem when there is DNA material of good quality and quantity from a single source. However, a problem arises when the quantity of DNA is less than 0.5 nanograms. These samples are called low template DNA (LTDNA). Existing techniques for the analysis of such samples were uninformative, which led to the development of a new technique, Low Copy Number (LCN) analysis, in which the number of amplification cycles is increased and the samples are chemically enhanced to give better results.
The graphs produced by this method can show ‘stochastic random effects’, which include a peak obtained from ‘background noise’, contamination (from another source) or ‘stutter’ (a false peak produced by the analytic process).[108] These random effects can generate ‘partial or incomplete’ peaks on the graph. This is where the interpretation of results can become very subjective. With LTDNA it is often not possible to provide reliable statistical random match probabilities. Computer programmes have been developed to overcome subjectivity in the analysis, but such science is not yet established.[109]
DNA profiling raises both scientific and legal issues. For DNA profiling, the print first has to be created, then it has to be established whether the prints match, and finally the frequency of a random match in the reference population has to be calculated.[110]
(2) Creating a DNA Print
Creating the print is usually a straightforward process in diagnostics and research, where ample amounts of DNA material are available and the tests can be repeated to produce good results. However, in forensic science the samples are often minute and the tests cannot be repeated. Furthermore, the samples are often mixtures from more than one person and, to complicate matters, they are often contaminated. The situation with SGM+ and Identifiler testing is not as bad as with LTDNA testing because of the quantity of DNA material available. The NRC report[111] concluded that ‘[t]he current laboratory procedures for detecting DNA variation…is fundamentally sound’,[112] but it stressed that there is ‘a need for standardization of laboratory procedures, proficiency testing and accreditation of laboratories in order to assure quality of forensic analysis’.[113] It also recommended that laboratory error rates as determined by proficiency testing should be disclosed to juries.[114]
(3) Matching of the DNA Prints
After the print is obtained, the current procedure in laboratories is to declare whether the pair of samples match or do not match. Some laboratories have a third category of an inconclusive match. Some experts, however, favour reporting a likelihood ratio rather than adopting a rigid match or no-match approach.[115] Matching can be made difficult by variations in the DNA prints caused by the quality of the sample (degradation can reduce the intensity of the bands) and the testing conditions.[116] This variation can produce observer errors. Mixed samples from multiple sources can make the matching more complicated.
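As a purely illustrative formulation of the likelihood ratio approach (a general statement of the concept, not one prescribed by any particular court or laboratory), the expert weighs the probability of the observed profiles under two competing propositions:

LR = P(evidence | the samples come from the same source) / P(evidence | the samples come from different sources)

A ratio well above 1 supports a common source, a ratio well below 1 supports different sources, and a ratio near 1 is uninformative, allowing the strength of the evidence to be graded rather than forced into a match/no-match dichotomy.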
(4) Estimation of the Frequency of the Match Obtained
The most controversial aspect of DNA testing is the statistical estimation of the frequency of a match in the reference population. This estimate is necessary in order to assess how likely it is that another person in the reference population would match by chance.[117] Frequency estimates of each band or allele are obtained from the national database. Based on these frequency estimates for the individual alleles, the overall frequency of the match is obtained ‘using formulae that assume that the alleles are statistically independent’.[118] The problem with these estimates is the assumption that the allele frequencies are independent, since there are variabilities in the frequency of alleles in population subgroups from different racial backgrounds. The frequency of a heterozygous genotype is calculated by the formula 2pq (p and q being the frequencies of the two alleles in the genotype). If the frequency of band A is 0.03 and the frequency of band D is 0.05, then the frequency of the genotype would be 0.003 (2 × 0.03 × 0.05), that is to say the frequency of the AD genotype would be 3 in 1,000. This calculation holds good if the alleles are statistically independent. For the alleles to be statistically independent the given population has to be in Hardy-Weinberg equilibrium.[119] Most critics, however, believe that most populations are not in Hardy-Weinberg equilibrium because of non-random breeding in the population.[120]
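To restate the arithmetic in that example (an illustration only, using the frequencies given in the text):

frequency of the AD genotype = 2pq = 2 × 0.03 × 0.05 = 0.003, i.e. about 3 in 1,000

The factor of 2 arises because a heterozygous individual may carry the two alleles in either combination of inheritance; multiplying p and q is only legitimate if the two alleles occur independently of one another, which is precisely what Hardy-Weinberg equilibrium assumes.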
Having calculated the genotype frequency at each locus, the frequency of the entire DNA profile (the multilocus genotype) has to be calculated. This is done by multiplying the frequencies of the individual genotypes (the so-called product rule). For this, again, the genotype frequencies have to be statistically independent. If the genotypes at different loci are statistically independent, the population is said to be in linkage equilibrium. Here again most critics believe that populations are not in linkage equilibrium because of non-random breeding.[121]
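A simple worked illustration of the product rule (the figures are hypothetical and chosen only to show the arithmetic): if the genotype frequencies at four independent loci are 0.003, 0.01, 0.02 and 0.05, the estimated frequency of the combined profile is

0.003 × 0.01 × 0.02 × 0.05 = 3 × 10⁻⁸, i.e. roughly 1 in 33 million

Because the per-locus figures are multiplied together, even modest departures from independence at individual loci can compound into a substantial error in the final estimate, which is why the assumptions of Hardy-Weinberg and linkage equilibrium matter so much.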
The 1992 NRC report[122] on DNA technology acknowledged that there are controversies regarding population structure and proposed using the ‘ceiling principle’ to estimate match frequencies. It recommended that ‘random samples of 100 persons should be drawn from each of 15-20 populations, each representing a group relatively homogeneous genetically; the largest frequency in any of these populations, or 5%, whichever is larger, should be taken as the ceiling frequency’.[123] This approach soon came under criticism, and in 1996 the National Research Council, acknowledging the criticism, proposed that ‘profile frequencies be assigned a “confidence interval” formed by multiplying and dividing the frequency by 10’.[124] However, the controversies surrounding the estimation of match frequencies remain far from settled.
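For example (a purely illustrative figure), on the 1996 approach an estimated profile frequency of 1 in 10 million would be reported with an interval running from 1 in 1 million to 1 in 100 million, the point estimate being multiplied and divided by 10 to acknowledge the uncertainty introduced by population structure.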
A stringent inquiry into the fallibility of DNA profiling was first seen in the US in People v Castro.[125] Castro was accused of two counts of murder, and the investigators obtained a blood stain from Castro’s watch which he claimed was his own blood. Judge Sheindlin ruled that the DNA evidence was inadmissible to prove that the stain on the watch was that of the victim, but was admissible to show that the DNA profile on the watch was not that of the victim. The court held that ‘methods for determining exclusion’ were ‘less complex and more reliable than those used to show inclusion’.[126] Though the court ruled that DNA tests, when properly performed, were reliable and admissible under the Frye rule, the evidence was not admissible to determine inclusion because of various deficiencies in the testing carried out by the Lifecodes laboratory. The court was critical of its ‘use of contaminated probes, the absence of laboratory controls and for the inconsistency between its methods for declaring a match between samples and declaring a measured match in the population database’.[127]
In England and Wales the Forensic Science Regulator commissioned a review of the interpretation of DNA evidence and published the review in December 2012.[128] The aim of the report was to come up with principles that could be used in future to help generate a ‘unified interpretation and reporting policy’.[129] The report advocated the removal of the ‘artificial divide’ between conventional DNA and low template DNA. It agreed with the ‘Caddy Review’[130] that the commercial term Low Copy Number (LCN), which is confusing, should be replaced by a more generic term, Low Template DNA analysis (LT-DNA). It proposed that the threshold of 100-200 picograms of available genetic material should not be used to differentiate conventional DNA from low template profiles, because DNA quantities above this threshold may also show the stochastic effects that are seen in low concentration DNA profiles.
The report also highlighted the lack of uniformity in the validation methods used by the police, government and private laboratories that provide DNA profiling. There is also no framework for monitoring and assessing the techniques used and the results produced by these laboratories. A study by John Butler in the US showed that ‘random match probability estimates (from the same electropherogram) varied by ten orders of magnitude between different suppliers’.[131] The problem of contamination of consumables and the lack of a standard DNA database are also highlighted in the report. In conclusion, the review states that at present there is no standard interpretation method for complex DNA analysis.
(5) Admissibility of DNA Evidence
The court in R v Hoey[132], in rejecting the admissibility of DNA profiling evidence, was critical of the poor handling of samples to protect their ‘integrity and freedom from possible contamination’, as well as the poor bagging, labelling and recording of the items.[133] The court was of the opinion that without a proper system in place for such protection, the evidence generated by forensic testing would be of no probative value to the tribunal.[134] The court was also of the view that LCN analysis ‘has not been “validated” by the international scientific community’.[135] Hot on the heels of this judgement came a review of the science of Low Template DNA analysis by Caddy et al., which concluded that the science supporting LTDNA was sound.[136]
In R v Reed and Reed[137] the court concluded that DNA profiling using the LCN technique for samples above the stochastic threshold was reliable and is admissible unless and until further discovery shows otherwise. The court was not certain what the stochastic threshold was, but scientific evidence points to a level somewhere between 100 and 200 picograms.[138] For quantities between 100 and 200 picograms, disagreement would be expected but such a situation would be rare,[139] and the scientific disagreement will be ‘resolved as the science of DNA profiling develops’.[140] The court, however, made no observation on the admissibility of evidence generated with quantities of DNA below 100 picograms.[141] The court in Reed and Reed also did not specify whether the stochastic threshold referred to the total DNA available in a mixed sample or to that of the minor profile.[142] In R v C [2010] EWCA Crim 2578 the court was of the opinion that reliability is the issue, not the quantity.[143] However, the quantity of DNA is not the only factor that affects the stochastic threshold, because the threshold also depends on other factors such as sample degradation and the ‘presence of other materials that can affect profiling chemistry (termed inhibitors)’.[144]
The court in R v Dlugosz[145] acknowledged, quoting a paper by Dror and Hampikian,[146] that there is no objective standard for the interpretation of mixed DNA profiles. The court admitted DNA evidence in the absence of a statistical random match probability for the profiles. It held that an evaluative opinion on the analysis of a mixed profile, based on the experience of the expert, is admissible as long as the jury is informed of the limitations of the evidence provided.
The validity of the theory underlying the science of DNA analysis is universally accepted. However, the validity of the techniques applying this theory has given rise to concern, especially since its introduction into the courts. The problems of ‘poorly defined rules for declaring a match, experiments without controls; contaminated probes and samples; and sloppy interpretation of autoradiograms’ continue to give rise to doubts about the reliability of such evidence.[147] There have been reports in the US of abuse of scientific evidence, such as overstating the frequency of a genetic match, reporting inconclusive results as conclusive, failing to report conflicting results and reporting improbable results, among others, which has added to these concerns.[148] The courts in England and Wales have not laid down any uniform principles for the admissibility of DNA evidence, except to say that DNA profiling is reliable and that an evaluative opinion on DNA analysis based on the experience of the expert is admissible as long as the jury is informed of the limitations of the evidence provided.
DNA profiling and its application in criminal investigations are therefore not foolproof. The courts have to take heed of these flaws in DNA evidence when admitting such evidence in criminal trials.
(B) Fingerprinting – A Gold Standard or A Junk Science?
Fingerprinting is based on the presence of friction ridges on the volar surfaces of the fingers and hands. These ridges form patterns, in the form of arches, whorls and loops, which persist throughout life unless there is skin loss or scarring. The arrangement of ridges or patterns varies from finger to finger and between individuals. It is believed that no two individuals in the world have the same ridge patterns. This uniqueness is the basis of fingerprinting. The analysis of these patterns is variously described as ‘finger ridge analysis, fingerprint comparison, fingerprint identification or individualisation’.[149]
Fingerprints taken from the crime scene are usually latent, and various techniques are used by fingerprint experts, who by training and experience are skilled in this task, to make these latent prints visible. The prints are then compared with those obtained from the suspect or from a database. The finger ridge pattern may be unique to an individual, but fingerprint examiners work with impressions which are at times only partial. The 2009 NAS report argues that:
‘Uniqueness does not guarantee that prints from two different people are always sufficiently different that they cannot be confused, or that two impressions made by the same finger will be sufficiently similar to be discerned as coming from the same source. The impression left by a given finger will differ every time, because of inevitable variations in pressure, which change the degree of contact between each part of the ridge structure and the impression medium. None of these variabilities – of features across a population of fingers or of repeated impressions left by the same finger – has been characterised, quantified, or compared’.[150]
The friction skin ridges are three-dimensional, whereas the prints or impressions obtained are two-dimensional, and many details of the unique friction skin ridges do not survive this transition. The pressure with which the impression is made and the surfaces from which the impression is taken can produce distortions which further affect the quality of the print.[151]
Fingerprint identification has in the past been regarded as a ‘gold standard’[152] and has been accepted without challenge in the US courts for more than 100 years, which has led everyone, including the courts, to believe that it is infallible.[153] The fingerprinting community believes that the technique, when properly applied, has an error rate that approaches zero.[154] However, the NAS report says that while ‘there is limited information about the accuracy and reliability of friction ridge analyses, claims that these analyses have zero error rates are not scientifically plausible’.[155] Jennifer Mnookin, while highlighting the weaknesses of fingerprinting evidence, sums it up by saying that:
… [g]iven the general lack of validity testing for fingerprinting; the relative dearth of difficult proficiency tests; the lack of a statistically valid model of fingerprinting; and the lack of validated standards for declaring a match, such claims of absolute, certain confidence in identification are unjustified…. Therefore, in order to pass scrutiny under Daubert, fingerprinting experts should exhibit a greater degree of epistemological humility. Claims of “absolute” and “positive” identification should be replaced by more modest claims about the meaning and significance of a match.[156]
Simon Cole has pinpointed some of the problems with fingerprinting evidence:
·        Fingerprinting has never been scientifically tested.
·        The fingerprinting community has not yet articulated what constitutes a fingerprinting forensic match.
·        Fingerprinting experts are self-regulated, but weakly self-regulated.
·        Forensic fingerprint identification claims an exaggerated degree of scientific certainty.
·        Fingerprinting identification has enjoyed enormous freedom from scrutiny.[157]
He concludes by saying that ‘fingerprinting has constructed a perfect rhetorical system, in which the actual accuracy of the technique is irrelevant’ and that the ‘conceptual framework in which fingerprint examiners operate is “junk science”’. This framework, he contends, is not suitable for the post-Daubert era.[158] Cole is right when he says that it is ‘becoming increasingly difficult to find a scholar who will argue that latent print individualization is valid’.[159]
In the UK too there has been a ‘blanket reception’ of fingerprinting evidence.[160] The legal community in the UK woke up to the fallibility of fingerprinting evidence in 2007, when the Scottish Government acceded to calls for a public inquiry, following the evidence of two defence fingerprinting experts in McKie’s trial[161] that the Scottish Criminal Record Office had misidentified fingerprints. The Fingerprint Report 2011[162] acknowledged that there is ‘no evidence … to suggest that fingerprinting evidence as a class is inherently reliable’, that there is ‘no basis for a claim of infallibility’ and that ‘[i]t is opinion evidence and where appropriate, it should be subject to robust scrutiny and challenge’.[163] The report goes on to say that ‘[t]he legal profession, judges and juries need to be alert to the subjective nature of fingerprinting evidence and to the other factors of relevance to the assessment of the opinion of a fingerprinting examiner in order to consider this evidence on merit’.[164]
However, as the NAS report suggests, this scrutiny by itself is not going to ‘cure the infirmities of the forensic science community’ because of the ‘limitations of the judicial system’ and the problems within the forensic sciences themselves.[165] Despite growing academic claims about the lack of evidence of the reliability of fingerprinting evidence, the courts in the UK continue to admit fingerprinting evidence without any serious challenge. In R v Buckley [1999] 163 JP 561 an unsuccessful challenge to the admissibility of fingerprinting evidence was made. The Court of Appeal laid down the grounds on which the court uses its discretion to admit or exclude fingerprinting evidence. The court held that the admissibility of such evidence will depend on all the circumstances of the case, in particular:
(i) the experience and expertise of the
witness;
(ii) the number of similar ridge
characteristics;
(iii) whether there are dissimilar
characteristics;
(iv) the size
of the print relied on, in that the same number of similar ridge
characteristics may be more compelling
in a fragment of print than in an entire print; and
(v) the quality and clarity of the print on
the item relied on, which may involve, for example, consideration of possible
injury to the person who left the print, as well as factors such as smearing or
contamination.[166]
The most recent and only other challenge to fingerprint evidence was in R v Smith [2011] 2 Cr App R 16. The Court of Appeal quashed the appellant’s conviction for murder after hearing conflicting fingerprint evidence from experienced experts. At trial, the prosecution’s fingerprint evidence could not be effectively challenged because the Crown had indicated that it would challenge the qualifications of the defence fingerprint expert, as her name was not on the Home Office register of fingerprint experts. In the UK only the names of UK police personnel who have completed the requisite training appear on the register. The court held that it is not for the police but for the court to decide who is a competent witness.[167] The court was also critical of the unprecedented monopoly held by the police force and lamented the lack of defence access to independent experts in this field. It also highlighted the poor quality of the fingerprint reports prepared by the prosecution: there were no detailed notes kept of the examination and no reasons given for the conclusions reached.[168] The court was of the opinion that fingerprint evidence should be presented to the court and the jury using modern presentation methods so that the jury can make an informed decision.[169] In conclusion, the court recommended that fingerprinting be the subject of further, wider investigations.[170]
In response to the court’s recommendation in R v Smith, the Forensic Science Regulator, Andrew Rennison, and Gary Pugh, the chair of the Fingerprint Quality Standards Specialist Group, released a paper, ‘Developing a Quality Standard for Fingerprint Examination’, in December 2011.[171] The paper reiterated that ‘…it is the accuracy of fingerprinting examiners rather than the uniqueness or persistence of fingerprints that is the foundational issue in the reliable provision of fingerprinting evidence’.[172] The report also stressed that:
·        Fingerprinting is not a science;
·        There is a need to recognise the risk of human error;
·        There is a need for a high level of individual competence;
·        Fingerprinting evidence must be objective and impartial;
·        The methods used must be valid.[173]
Such quality frameworks are to some extent available in other areas of forensic science, such as DNA profiling, but are still missing in the fingerprinting discipline, which raises questions about the reliability of the fingerprinting evidence presented to the courts. With the recent challenge to fingerprinting evidence in Smith, the release of the fingerprinting report (Scotland 2011) and the Forensic Science Regulator’s report (December 2011), more challenges to such evidence can be expected in the future. The days of blanket reception of such evidence should be over, but it remains to be seen whether the courts, judges and lawyers are prepared to take up the challenge.
(C) Bite-Mark Evidence
Bite-mark analysis involves the ‘detection, recognition, description, and comparison’ of bite-marks.[174] Forensic odontology was originally used for the identification of victims of mass disasters. However, in the 1970s forensic odontologists extended their expertise to provide testimony in criminal proceedings. Bite-marks found on the victim or the suspect, and sometimes on other objects, are compared or matched to the dentition of the person alleged to have inflicted the bite. A ‘match’ would imply that human dentition, and the imprint that it creates, is unique, and that it excludes the possibility that any other individual’s dentition could have created the same imprint. However, to date there is no universal agreement that human dentition is unique.[175] Quite apart from the question of uniqueness, bite-marks are usually made by only a limited number of teeth. Often the bite-mark leaves a very unsatisfactory impression on the skin or food material, and the impression is often characterised by ‘shrinkage’ and ‘distortion’.[176]
In fact scholars argue that ‘no data that could permit
forensic scientist to offer identification “to the exclusion of all others in
the world” exists and they are unlikely to come into being in the foreseeable
future. Such testimony is speculative and improper, both scientifically and
legally’.[177]According
to Adam ‘[b]ite-mark testimony…fails each of the five prongs of the Daubert test’.[178]Giannelli
further dents the validity of bite-mark testimony by saying that given its
background ‘it is critical that bite-mark evidence be challenged’.[179]Testimony
to the weakness of bite-mark evidence is the ‘increasing numbers of wrongful
convictions that are associated at least in part with bite-mark analysis’,
which have come to light after DNA testing.[180] The strength of bite-mark evidence probably lies in exclusion, i.e. in showing that a person’s dentition and a bite-mark do not match, rather than in finding a match.[181] Despite
the lack of scientific basis of bite-mark evidence there has been consistent
judicial acceptance of such evidence under Daubert,
Frye and the Federal Rules of
Evidence.[182]
(D)Footwear Mark Evidence (Shoeprint Evidence)
Impression evidence of footwear at a crime scene, which may be latent (not visible to the naked eye) or patent (visible), has to be collected, preserved and enhanced. The quality of the impression obtained will depend on the ‘experience, training and the scientific knowledge of the scene investigator as well as the agency’s resources’.[183] After the impression is analysed, it is compared for individual characteristics with the characteristics of the suspected source.[184] The criteria for positive identification depend on the individual laboratory as well as on the experience of the examiner and the clarity and uniqueness of the characteristics. Two types of characteristics are compared: the class characteristics, which result from the manufacture of the shoe (design/size), and the identifying characteristics, which result from objects attached to the sole and from damage such as cuts. The outcome of the comparison is usually declared as a ‘match or not a match’. However, the Scientific Working Group for Shoeprint and Tire Tread Evidence (SWGTREAD), USA, recommends the following terminology:
· Identification—a definite conclusion of identity;
· Probably made—a very high degree of association;
· Could have made—a significant association of multiple class characteristics;
· Inconclusive—limited association of some characteristics;
· Probably did not make—a very high degree of non-association;
· Elimination—definite exclusion.
Footwear impression comparisons are usually not assigned any ‘probabilistic or statistical significance’.[186]
Footwear print identification, like other impression/pattern evidence, is a subjective analysis, and in the absence of rigorous laboratory standards it gives rise to observer errors. Majamaa and Ytti[187] did a survey in which six sets of shoeprint photographs from fictitious crime scenes were distributed to 34 crime laboratories for analysis. The results showed that there were considerable differences ‘in the conclusion of identical cases in the reports from different laboratories’.[188]
According to the NAS report, for footwear mark evidence ‘there is no consensus regarding the number of individual characteristic needed to make a positive identification’, and the committee was not aware of ‘any data about the variability of class or individual characteristics or about the validity or reliability of the method’.[189]
In England and Wales the Court of Appeal in R v T[190] made it clear that experts can give footwear mark evidence in court but they cannot use likelihood ratios in forming an evaluative opinion. Thomas LJ held that ‘there is not a sufficiently reliable basis for an expert to be able to express an opinion based on the use of a mathematical formula’ and that the court is ‘satisfied that in the area of footwear evidence, no attempt can realistically be made in the generality of cases to use a formula to calculate probabilities’. The practice apparently has no ‘sound basis’.[191]
The court disagreed with the Forensic Science Regulator, which practised such an approach of calculating likelihood ratios for footwear evidence.[192] The court preferred the use of words such as ‘could have made’ (the mark), which would help the jury better understand the nature of the evidence, over more ‘opaque phrases’ such as ‘moderate (scientific) support’, which would be more misleading to the jury, especially the use of the word ‘scientific’.[193]
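By way of background, a likelihood ratio of the kind the court disallowed is, in standard forensic statistics (this formulation is offered as illustration and is not drawn from the judgment itself), the ratio of the probability of the observed mark under the prosecution's proposition to its probability under the defence's proposition:

\[ LR = \frac{P(E \mid H_p)}{P(E \mid H_d)} \]

where \(E\) is the footwear mark, \(H_p\) is the proposition that the defendant's shoe made the mark and \(H_d\) is the proposition that some other shoe did. The court's objection was precisely that, outside DNA, the frequency data needed to estimate these two probabilities with any confidence do not exist.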
The court rightly subjected the expert evidence to
strict scrutiny and also admonished the expert for adopting a reporting
protocol which failed to provide a transparent basis for reaching the
conclusions that were made.[194]At
the appeal, the court ruled that in the absence of sufficient and accurate data,
probabilistic calculations are ‘inherently unreliable’ except in the ‘field of
DNA (and possibly other areas where the practice has a firm statistical
base)…’.[195]
This is a timely reminder that, with the exception of DNA evidence which is
supported by established scientific data, forensic evidence such as footwear
mark evidence has reliability issues. Such evidence may have some probative
value, but it cannot be used as a basis for conviction in criminal trials until its reliability gains more credence.
(E)Ear Print Evidence
Ear-print evidence has been in use in Europe since 1965, but in the UK it came into use only in 1996. In some jurisdictions it has in the past been successfully used in the prosecution of crimes without much challenge.[196]
The use of ear-prints has come under criticism because of lack of evidence that
human ears are unique or the ear print impressions obtained are unique. Ears
are malleable three dimensional structures and the two dimensional prints
obtained from the crime scene are affected by several variables, the main being
pressure distortion. The reliability of these prints is further affected by the
lack of standardised protocols for retrieving and analysing these impressions. [197] The
process of retrieval and individualisation is subjective and there is a dearth
of publications and peer review on the subject. Research in this area of
forensic science is on-going. The EU funded Forensic Ear Identification
Research Project (FearID) completed its first project in 2005, which looked
into ways to improve retrieval of ear print impressions, overcoming
difficulties associated with pressure distortions and developing a more
reliable system of matching and classifying ear prints. FearID intends to
continue its research into finding ways to present its data using likelihood
ratios which can be used in the courts.[198]
In the US Courts, ear print evidence does not meet the
admissibility standards. In State v Kunze[199],
the defendant was convicted of murder based on partial latent ear print
evidence. At the Court of Appeal, Morgan J ruled that ear print identification
evidence has yet to gain general acceptance in the relevant scientific,
technical or specialist community and hence fails to meet the requirements of
the Frye test of admissibility.[200]
Morgan J, however, held that the court does not bar testimony about the lifting and preparation of latent prints, or testimony about the similarities and differences between the prints obtained, which could easily be evaluated by the jury. Testimony of non-inclusion, i.e. that the defendant cannot be excluded as the person who made the latent prints, can be accepted, but an opinion of inclusion, i.e. that the defendant made or probably made the prints, cannot be accepted.[201]
Ear print identification evidence is however
admissible in English law. In R v Dallagher[202]the
defendant was convicted of murder based in part on ear print evidence. The trial judge in fact directed the jury that they could convict the defendant on the basis of ear print evidence alone, in the absence of other supporting evidence.[203]
On appeal the defence counsel introduced evidence from three expert witnesses
who challenged the reliability of ear print evidence. They cited the absence of
empirical research on the uniqueness of individual ears or that of the ear
prints. In fact the prosecution expert agreed that high variability or
uniqueness of ear print was an assumption based on limited experience.[204]Other
issues, such as pliability of the ears and pressure distortion of the
impression, which affect reliability of the prints, were also raised.[205]The
lack of standardised methodology, absence of objective universally accepted
criteria for analysis and comparison of the prints and the subjective nature of
the analysis were also highlighted. The appeal was allowed and a retrial
ordered. The case was however dropped by the Crown in 2004.[206]
In R v Kempster[207] the defendant had been convicted in 2001 of one count of attempted burglary and three counts of burglary. His first appeal, in 2003, was dismissed. An appeal was heard again in 2008 at the recommendation of the Criminal Cases Review
Commission. The court allowed the appeal against the conviction on the first count of burglary and upheld the convictions on the other three counts. The court held that ‘
[w]e have no doubt that evidence of those experienced in ear print is capable
of being relevant and admissible’ but ‘ [t]he question in each case will be
whether it is probative’.[208] The court, having examined the ear prints, found that they ‘do not provide a precise match’.[209] The appeal on that count was allowed and the court found the conviction to be unsafe. The experts
from both sides were in agreement that ear prints pose a ‘ different and
difficult problem than fingerprints’ and that the ear can deform under pressure
and the fact that the person pressing the ear on a surface does not remain
motionless leads to distortions of the ear and of the impression left on the
surface.[210]
Although ear print evidence is admissible in the English courts, both appeals have cast serious doubt on the reliability of such evidence. Ear print evidence is admitted in English law since it fulfils the requirements of the ‘relevancy test’, but obviously the probative value of such evidence is too low for it to be placed before the jury, and it should be filtered out at the beginning of the trial.
A strict application of the Law Commission’s and/or the Daubert criteria for reliability of expert evidence would show that, except for DNA evidence (which is also not foolproof), other evidence such as fingerprinting, bite-mark, ear print and footwear mark evidence does not meet most of the criteria. There are obviously many reliability issues with such expert evidence when it is proffered in criminal proceedings. Some of this evidence has more probative value while other evidence has very little. The problem is where to draw the line between what is and is not reliable. Reliability tests require some such distinction to be made.
(F)Publication and Peer Review
The Court in Daubert
said that,
publication
(which is but one element of peer review) … does not necessarily correlate with
reliability…[b]ut submission to the scrutiny of the scientific community is a
component of “good science”, in part because it increases the likelihood that
substantive flaws in methodology will be detected’ and that ‘publication (or a
lack thereof) in a peer reviewed journal will be a relevant… consideration in
assessing scientific validity….[211]
On the other hand Richard Smith, the then editor of the British Medical Journal (BMJ), had this to say:
…there is something rotten in the state of
scientific publishing and … we need radical reform. The problem with peer
review is that we have good evidence on its deficiencies and poor evidence on
its benefits. We know that it is expensive, slow, open to abuse, possibly
anti-innovatory, and unable to detect fraud. We also know that the published
papers that emerge from the process are often grossly deficient.[212]
Standards to assure the quality of research produced have long existed. There are published standards and checklists; in addition there is peer review for funding, and there are supervisory advisory groups and committees which review the whole process. At the end of all this, research goes through publication peer review.[213]
As with all standards and procedures, there often
exists the question of whether these standards are effectively applied to the
processes to ensure quality. It is often assumed that publication peer review
is a guarantee that the research fulfils the criterion of a quality product. It
is supposed to guard against ‘mediocrity, bias, and deception – both conscious
and unconscious- in research’.[214]
However, a review of the literature on peer review by Grayson revealed startling inadequacies in the system. It is unable to ‘detect
deliberate deception or bias when, as is probable, the author has gone to great
length to massage or fabricate plausible data and interpretation’.[215]The
relationship between research and industry has allowed the industry to dictate
and guide research which is in their favour. The industry sponsors control not
only the research process but also the ultimate publication of results that
favour them.[216]In
summary Grayson concludes that biomedical peer review is:
· Slow – because of the sheer volume of papers that unpaid editors and reviewers have to handle, leading to years of delay
· Expensive – in terms of academic and editorial time consumed
· Biased – including factors such as intellectual bias, seniority of authors, institutional affiliations, nationality, language, geographical location, conservatism and discrimination against dissenting opinion
· Abused – including factors such as unconscious bias merging into conscious bias, special pleadings, witch hunting, promotion favouring colleagues and protégés, anonymity of reviewers, favouring of articles on ‘hot topics’ and a limited pool of reviewers especially in sub-specialities
· Incompetent – sloppy practices, senior members of editorial boards who act as reviewers without training in epidemiology and statistics, and the poor quality of some publications suggesting incompetence of peer reviewers who know less than authors owing to highly specialised modern science
· Unable to detect fraud – inability of peer review to detect gross scientific fraud with potentially dangerous consequences; fraud generated by authors to improve personal standing in a very competitive scientific arena; senior honorary authorship favours the junior researcher’s chance of publication and makes judging the true quality of the research more difficult, the outstanding example being that of Malcolm Pearce in the 1990s[217]
Armstrong highlights other problems with peer review. His research has found that peer review is unreliable, uninformative to readers, weak on quality, biased against replication, ignores usefulness and rejects surprising findings. His observation is that to get one’s research published one has to avoid ‘examining important problems, challenging existing beliefs, obtaining surprising results, using simple method, providing disclosure or writing clearly’.[218]
The role of peer review is to evaluate manuscripts, either accepting them for publication or rejecting them, not to ascertain their authenticity. The process does not ‘ensure scientific authenticity,
accountability or authority’.[219]
There appears to be ‘no universal, objective and infallible procedures,
standards and goals’ in the review process.[220]However
attempts are being made to improve the quality of research that is published by
standardising the reporting of research. In the past (even today) most
publications used the IMRAD (Introduction, Methods, Results, and
Discussion/Conclusion) approach of reporting. Now several new formats have been
proposed depending on the methodology of the research or the specific research
designs. These include:
· CONSORT – Consolidated Standards of Reporting Trials (22-item checklist)
· QUOROM – Quality of Reporting of Meta-Analyses (17-item checklist)
· MOOSE – Meta-Analysis of Observational Studies in Epidemiology (35-item checklist)
· TREND – Transparent Reporting of Evaluations with Non-randomised Designs (22-item checklist)
· STARD – Standards for Reporting of Diagnostic Accuracy (25-item checklist)[221]
Checklists in research publication can improve the
quality of the research but these publication standards need to be widely
accepted by the publication fraternity.
The ICMJE (International Committee of Medical Journal Editors), a group of editors of leading biomedical journals originally known as the Vancouver Group, has since 1978 been working to develop editorial policies and guidelines. Despite having such policies and guidelines, these leading medical journals still have problems ‘identifying instances of gross errors and fraud, let alone systematic bias, omissions and exaggerations’.[222]
Given these inherent weaknesses in the publication and peer review process, shouldn’t the courts stop ‘idealizing images of science’ and ask whether they should admit evidence based on publications which are redundant, which lack full disclosure of conflicts, or which are corporate sponsored or corporate influenced?[223] Without doubt there is a need for more intense scrutiny of the published and peer reviewed evidence that is presented to the courts. What is reliable remains the perennial question.
The criminal justice system is heavily reliant on forensic and medical expert testimony to resolve conflicts expeditiously and with finality. However, can expert testimony based on science (forensic science, forensic techniques and/or technical expertise) fulfil such legal needs? Does such testimony meet the Law Commission’s reliability criteria? The theory underlying the science of DNA obviously does, but the validity of the techniques applying this theory has given rise to concerns. The reliability of such evidence has been tainted by, among other things, lack of laboratory controls, poorly defined rules, contamination and sloppy interpretations. The reliability of fingerprinting evidence, which was once considered infallible, is now being questioned. Even the court in R v Smith recommended that fingerprinting be the subject of further, wider investigations.
Currently other evidence such as that of bite-marks, footwear prints and ear
prints, appears to have little or no quality framework in place for it to be of
much probative value. However efforts are in place to build such quality
frameworks. In recent years even evidence based on publication and peer review has
come under criticism.
Considering these reliability issues, it becomes difficult to draw the fine line between reliable and not-so-reliable evidence which is often necessary to formulate standard reliability tests. The adversarial process and judicial review are not sufficient to overcome these problems. Further reforms in these areas are necessary, and tightening the rules of admissibility of expert evidence with the introduction of statutory reliability tests is unlikely to resolve these problems, as has been seen in the US. Despite the existence of such reliability issues, medical and forensic expert testimony has a tendency to be given undue weight in the courts, and evidence that lacks a scientific basis or that is based on errors, fraud, bias, omissions and exaggerations can lead to miscarriages of justice.
(4) Miscarriages of Justice
Miscarriages of justice are often blamed on the
laissez-faire approach to admissibility of expert testimony. Are miscarriages
of justice the result of unreliable expert testimony alone? An analysis of some
recent miscarriages of justice may provide an insight.
A miscarriage of justice is said to have occurred when someone is treated by the state in ‘breach of their right’ as a result of
a ‘deficient process’ or ‘misapplication of law’ and when ‘factual
justification’ for the punishment does not exist.[224]The
causes of wrongful convictions leading to miscarriages of justice have been
quite extensively studied and reported in the western literature. Some of these
include:
· Mistaken eye witness identification (about 75% of the cases)
· False confessions (about 14–25% of the cases)
· Tunnel vision – selective ‘filtering’ of evidence by investigating officers, scientists and lawyers to ‘build’ a case
· Unreliable informant testimony/falsification of evidence
· Prosecutorial misconduct – non-disclosure of relevant evidence
· Inadequate defence representation – incompetence of defence lawyers – lack of funds
· Unreliable expert testimony and imperfect forensic science[225]
The exact extent to which unreliable expert evidence
contributes to wrongful convictions is difficult to gauge but based on DNA
exonerations, forensic errors have been implicated in about 66% of the cases
and fraud or tainted evidence in about 31% of the cases.[226]
Evidently unreliable expert evidence plays an important role in miscarriages of justice. It attracts significant media and political attention, which is often followed by a perceived need to reform the existing criminal justice system.
(A)The Guildford Four and The Maguires
As noted in the introduction, a number of high profile miscarriages of justice came to light in the late 1980s and early 1990s, and two public inquiries were established to improve the English justice system and restore public confidence.[227] The May Inquiry (1994) into the circumstances surrounding the miscarriage of justice in relation to the Guildford Four and the Maguire Seven raised considerable doubts about the forensic science evidence.
The Guildford Four had spent 14 years in prison before their convictions for the Guildford and Woolwich bombings were set aside by the Court of Appeal in 1989. The Maguires had their convictions for possession of explosives set aside in 1990, having served their full sentences.
The May Inquiry[228] concluded that the convictions were unsafe for several reasons, one of the main ones being the prosecution’s contention that traces of nitro-glycerine found on the hands of the defendants and on the gloves in the house had come from handling of nitro-glycerine (NG) compound and could not have been from innocent contamination.[229]
However, tests conducted by experts especially for the inquiry showed that innocent contamination by touching or the use of a towel could occur.[230] Furthermore, late in the trial fresh evidence became available that the TLC method used for identification of NG could not distinguish NG from pentaerythritol tetranitrate (PETN). The prosecution assured the defence that if the experts were recalled they would be able to exclude PETN by the same test,[231] which was in fact not so. There were tests available to distinguish between NG and PETN.[232]
The defence counsel were ‘seriously misled’, leading them to make ‘unnecessary concessions which the judge then accepted’.[233] A second test on the Maguire samples was carried out using different solvents, but this was not revealed at the trial.[234]
The May report concluded that there was a failure to disclose relevant information and that the ‘scientists … imperfectly understood their duties as forensic scientists and witnesses’. The report was critical of the failure of the prosecution to disclose expert notebooks[235] and of weaknesses in the conduct of the trial by Donaldson J.[236] The report ‘apportioned most of the responsibility to individuals,
particularly the individual scientist’.[237]This
individual failing according to the report was not due to weakness or fault of
the criminal justice system and no rules in the system could provide complete
protection from these failings.[238] A reliability test for the admission of expert evidence could not have prevented these wrongful convictions, given the lack of judicial vigilance and the failure to adhere to proper investigative and prosecutorial procedures.
(B)Sally Clark and Angela Cannings
In
R v Sally Clark[239],
Sally Clark had been convicted of murdering her two infant sons, Harry and
Christopher. Her first appeal against the conviction was dismissed. A
subsequent appeal was successful on two grounds: first, that the Crown pathologist, Dr Williams, had not revealed crucial microbiological tests on Harry which suggested he might have died from natural causes, and secondly that the statistical evidence provided by Professor Meadow on the chances of two infant deaths in the same family was flawed.
The
Court of Appeal admitted that, taking all medical evidence into consideration,
this was a difficult case. There was considerable disagreement between experts
as to whether to classify the deaths as due to unnatural causes or uncertain
causes.[240] The court put into perspective the failings on the part of the prosecution witness, Dr Williams, by agreeing with a report by Professor Byard, a forensic science specialist from Australia who had written to the appellant’s solicitors:
Standard protocols
were not followed and essential steps such as routine dissection and histology
were omitted which prevented verification of alleged autopsy findings. As well,
a number of potentially important diagnoses and conclusions were altered over
time. For example, Christopher's initial cause of death of lower respiratory
tract infection was withdrawn; observations of no significant haemorrhage
within his lungs were changed to marked haemorrhage, …. The finding of retinal
haemorrhages in Harry which was vital to sustain the diagnosis of shaken-impact
syndrome was altered to no haemorrhage, brain lacerations were found to
represent post-mortem artefact, swelling of the spinal cord was not present and
bruising of paraspinal tissues was also not able to be substantiated. This is
not a unique situation with statements in the
literature in recent years that “investigations into the pathology and
circumstances of sudden infant death are often scanty and inexpert” with
significant omissions being documented when cases were audited. The Clark
brothers demonstrate difficulties that may arise if cases are not fully
investigated with all of the results being clearly summarised and discussed in
the autopsy report. Trying to clarify findings, diagnoses and circumstances of
death at a later stage may simply not be feasible due to a wide variety of
possibilities other than inflicted injury.[241]
Furthermore, Dr Williams had failed to reveal vital information regarding the microbiological and virology reports on Harry, which had been with him all along but were not revealed at trial or at the first appeal. Various samples collected at post-mortem tested positive for Staphylococcus, which could have accounted for death from natural causes and could have compelled the jury to reach a different verdict, had all the information been available to them.[242] This
led the Court to view the verdict as unsafe. The court also held the view that
if ‘Harry’s death may have been from natural causes’ then ‘no safe conclusion
could be reached that Christopher was killed unnaturally’.[243]
The court was also very critical of Professor Meadow’s statistical evidence, which probably had an impact on the jury. He was asked about the risk of sudden infant death (SIDS) in a family. He quoted a figure of 1 in 8543 for the chance of a single SIDS death in a family, taken from the Confidential Enquiry into Sudden Death in Infancy (CESDI). He quoted the risk of two deaths in the same family from SIDS as 1 in 73 million, a figure he obtained by multiplying 1 in 8543 by 1 in 8543.[244] He went on to say that the chance of this happening in England and Wales was once every 100 years.[245] The evidence was later proven to be wrong. The court was surprised that there was no objection from the defence and that the evidence was put before the jury.
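To make the arithmetic concrete (this worked example is offered by way of illustration and is not taken from the judgment), Meadow treated the two deaths as statistically independent events and simply squared the single-death figure:

\[ \left(\frac{1}{8543}\right)^{2} \approx \frac{1}{73{,}000{,}000} \]

The multiplication rule is valid only if the two deaths are independent. If genetic or environmental factors make a second SIDS death in the same family more likely than the first, the squared figure grossly understates the true probability, and the independence assumption was one of the principal criticisms subsequently levelled at this evidence.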
The case highlights
misconduct and serious non-disclosure on the part of Dr Williams,
misinformation by Professor Meadow, poor scientific and forensic methodology
overall, and a lack of vigilance on the part of the defence and the court. Arguably the standard common law test should have served to exclude Professor Meadow’s evidence, which was outside his field of expertise. It is doubtful that a reliability test for the admission of expert evidence would have prevented such failings.
Science cannot provide simple answers to complex scientific questions posed by
deaths related to SIDS.
In R v Cannings[246], Angela Cannings was convicted of the murder of her two infants, Matthew and Jason, by smothering. Angela had four children, three of whom died in infancy. Three of the four children suffered from Acute Life-Threatening Events (ALTEs), of whom two died and one survived.
The Crown’s case was that
Angela had smothered her two infants to death and the death of another child
and three ALTEs in the family formed an ‘overall pattern’ of harm to the
infants by smothering. The appellant, who had no personality or psychotic disorder and was described as a good and loving mother, denied hurting her children. Her contention was that the infants died from natural causes which, though unexplained, fell into the category of deaths known as Sudden Infant Death Syndrome (SIDS).
The Court of Appeal’s
analysis of the evidence before it showed that there was not a ‘single piece of
evidence conclusive of guilt’.[247] The Crown’s case was exclusively based on ‘specialist evidence about the conclusion to be drawn from the history of three infant deaths and further ALTEs in the same family’.[248]
The Crown’s first witness, Professor Meadow, was particularly concerned by three infant deaths in the same family. His testimony was that where no natural cause of death has been found, the fact that the child was well before he died, and the fact that three deaths occurred under similar circumstances in a family, are features of a clinical diagnosis of smothering, although it may be a condition that is yet to be ‘understood or described’ by doctors.[249] Dr Ward Platt, when asked about Matthew’s and Jason’s deaths, opined that the earlier admission of them both to hospital ‘was highly suggestive of a baby who had been killed’.[250] The other evidence relied
upon by the prosecution was the ‘pattern of events’ of six instances where all
four children suffered serious ALTEs or death when in the sole care of the
mother.[251]
The defence argued that
there was evidence that unexpected natural death within the same family is not
a rare event and studies have shown that three infant deaths from natural
causes can take place in the same family.[252]Evidence
of multiple infant deaths in the extended Cannings family suggestive of a
genetic link was adduced by the defence.[253]Furthermore
there was no physical evidence to support the allegations of inflicting harm to
cause violent death in the infants.[254]
The Court of Appeal was
critical of ‘dogma’ on the part of the prosecution witnesses for introducing
evidence which has no scientific basis.
The court held that in view of the fact that ‘the fundamental basis of
the Crown’s case, based on extreme rarity of three separate infant deaths in
the same family and the pattern of events in this particular family …’ has been undermined and there is
‘persuasive fresh evidence, which was not before the jury’, the conviction
cannot be safe.[255]The
court went on to say that in a criminal case murder has to be proved and that a
high probability of guilt will not suffice.[256]
The Court of Appeal in Cannings acknowledged that the burden of
proof for criminal conviction was not met. It was apparent that fallible
scientific evidence by eminent experts tends to be readily accepted. It raises
the question of why such evidence is accepted in spite of the inherent weaknesses
that were exposed in Clark and Cannings. The answer according to Adam
Wilson is social utility.[257]Despite
any reliability test, social utility would require admission of testimony which
may be potentially unreliable. The bulk of wrongful convictions are attributed
to mistaken eye witness identification (75%) and false confessions (14% to 25%).[258]Unreliable
expert testimony and imperfect forensic science forms a small segment of this
spectrum, though it attracts the most attention especially in high profile
cases. From the social context such evidence remains more favourable for
securing a conviction in the absence of a guilty plea.[259]Failure
to prosecute in the face of potentially flawed scientific evidence may lead to
failure to ‘protect vulnerable members of society’.[260]
An analysis of the recent
miscarriages of justice reveals that the failures were not due to inherent
weaknesses of the forensics or medical sciences alone but also due to
individual failings. Some of these failings that stand out are:
· Prosecutorial impropriety
· Lack of judicial vigilance
· Unskilled defence counsel (who are often poorly funded)
· Failure of expert witnesses to adhere to established protocols, improper documentation and dishonesty on the part of the expert witness
· Undue weight being given to flawed expert testimony
A reliability test in such circumstances could not have prevented miscarriages of justice but would in fact reduce the pragmatism which is in the interest of social utility. Notwithstanding such weaknesses in the criminal justice system, the problems of reliability of expert scientific/forensic evidence, on which the system is heavily reliant, need to be addressed.
(5) Addressing the Problems of Reliability of Expert Scientific Evidence
There is a fundamental difference between science and law. Though both seek the truth, the fundamental objective of law is justice, which requires that a ‘clear decision be made in a reasonable and limited amount of time’.[261] Daubert’s attempt to resolve the conflict between ‘legal truth’ and ‘scientific truth’ by admitting only scientific truth which will assist the trier of fact to make a just decision (legal truth) has not lived up to expectations. ‘Criticism of Daubert and its progeny in state courts has been rampant…’ and ‘[a]cademic practitioners and trial judges all have complained that the standard is unclear, is difficult to interpret, leads to inconsistent results… and is confusing to the jury’.[262] Tightening
the rules on the admissibility of scientific/forensic evidence is unlikely to
resolve the perennial problem of unreliability of such evidence where Daubert appears to have failed. The
weaknesses have to be addressed in some other way.
(A)Forensic Science Reforms
Though the contribution of unreliable expert testimony and imperfect forensic science to the overall causes of wrongful convictions is small, DNA based exonerations show that forensic errors play a significant role in miscarriages of justice.[263] Improving the reliability of scientific/forensic testimony by reforming the forensic institutional structure is needed.[264] Forensic science has been plagued by accusations of ‘sloppy, biased fraudulent work’, ‘pro-prosecution bias’, ‘police investigators and forensic scientist bias’ (both conscious and unconscious) and ‘error and false interpretation of legitimate results’, among others.[265] In fact there have been accusations that ‘some forensic science expert witnesses are in a position where they can manufacture evidence merely by wishing it into existence, and evidence suggests that some of them have done precisely that’.[266]
The NAS report reiterated that one of the problems with the admission of forensic evidence in criminal trials was the ‘degree to which methodology relies on human interpretation that is susceptible to bias, human error, or a lack of operational procedures and quality control of standards’.[267] The report suggested setting up a cross-disciplinary, independent federal agency, the National Institute of Forensic Science, to address these weaknesses in the forensic sciences.[268] It emphasized the need for standard reporting,[269] standardized operating procedures to reduce bias and human error,[270] laboratory accreditation and professional certification,[271] and quality control procedures.[272]
Conventional forensic ‘individualization’ sciences, such as fingerprinting, dentition, writing, firearms, footwear, ear prints and bite marks, lack the scientific basis that a science such as DNA profiling offers. In each of these disciplines ‘… little rigorous, systemic research has been done to validate the discipline’s basic premises and technique, and in each area there is no evident reason why research would not be feasible’.[273]
In England and Wales, the forensic service providers included the Police Laboratories, the Forensic Science Service (FSS, which had a virtual monopoly in the 1980s) and some private forensic science providers (FSPs), the largest of which is LGC Forensics. The FSS was government owned and contract-operated, and provided services to the police force and the Crown Prosecution Service, among others.[274] The Forensic Science Regulator (FSR), a public appointee, ensures quality standards. The FSR has introduced accreditation to international standards (ISO 17025) to ensure the quality of the services. The standards assess organisational competence, individual competence, validity of methods, objectivity and impartiality.[275] The standards however do not guarantee quality, and they do not cover ‘complex interpretation of results and presentation of evidence in court’.[276]
The private providers that provide services to the government are subject to these standards but the police laboratories are not. The impartiality of the police laboratories, and of the private service providers who do work for the government, remains questionable. To overcome this problem the FSR issued draft Codes of Practice and Conduct for forensic science providers and practitioners in the criminal justice system in 2010.[277] The FSR however has no statutory powers to enforce compliance with its regulatory framework.
Besides reforms in forensic science, another
important area which will benefit from reforms is forensic science research and
education. However in England and Wales the state of affairs in this area
remains ‘lamentable’.[278]
Few universities have significant research output in forensic science, partly
due to a lack of government funding. The US House and Senate Democrats have proposed legislation, the Forensic Science and Standards Act of 2012, S. 3378, which would provide 300 million dollars over the next five years for research and to develop standards in forensic science.[279] In the UK, the government could pass some form of legislation to make funding available for such purposes in the interest of criminal justice. Not only research but also education in forensic science at the university level needs to be promoted aggressively.
The FSS has been closed since 31 March 2012, but its archives of files and samples have been retained to allow
for review of old cases. There have been concerns that the closure of the FSS
would affect forensic research which was previously carried out with public money
and that commercialization of forensics may undermine the criminal justice
system. [280]On
the other hand, the closure of the FSS may eliminate the monopoly, and the decoupling of crime laboratories from government agencies may improve the standards of the forensic sciences. What is in store for the future is difficult to predict, but without doubt there is a definite need for robust reforms to improve the forensic sciences for the good of criminal justice. There is a need to institutionalise reforms to produce valid results, eliminate bias and incompetence, and put in place adequate internal controls to produce trustworthy evidence, so as to improve the reliability of expert testimony. This has successfully been done in clinical laboratories and surely it can be done in forensic laboratories (unless science generated for litigation purposes intentionally lacks objectivity). To date the government has made admirable progress in this area of reform. It is also apparent that such reforms alone will not be enough to overcome the reliability issue without ridding the system of the problems of partisan testimony.
(B)Court Appointed Experts
The Law Commission has recommended that Crown Court judges be given the power to appoint independent experts ‘in exceptional cases’,[281] where this may be of assistance to the court.[282] This is to be facilitated by the formation of ‘an independent panel of experienced legal professionals’ chaired by an experienced Circuit Judge.[283] Criminal Procedure Rule 33.7 allows for the appointment of court-appointed experts. In the US this proposal has been in existence for over 100 years and has finally been codified by Rule 706 of the Federal Rules of Evidence.[284] The proposal is
attractive because a court appointed neutral expert, who is compensated by the
court, would reduce the bias that comes with partisan expert witnesses who are
compensated and instructed by other parties. Court appointed experts can
clarify issues for the jury which the partisan experts don’t do when they
tailor their testimony to meet the needs of their clients who pay them. They
can also narrow issues for the court which will save a lot of time.[285] However, experience in the US has shown that there are inherent problems in
appointing such experts. There are difficulties with locating these experts, complexities of communication and matters of compensation, besides the problem of distrust of the expert by the judges and difficulties in finding a neutral expert.[286] According
to Saks, ‘court appointment of non-party experts’…’has been a resounding
failure everywhere it has been tried’ since ‘either it is rarely used’ or
‘where it had been used more aggressively, it promptly fell into disuse’.[287]
Some of the other arguments
against appointment of neutral experts include:
· These experts would wield excessive power and it would be difficult to contradict their testimony, which would become dispositive.
· They may compromise the impartiality of the judges and the jury and undermine the adversarial litigation system.
· They can mislead, be partial, fallible, and do as bad a job as anyone else.[288]
Samuel Gross disagrees with these critics and believes that the appointment of neutral experts is ‘the most appealing solution to the problem of partisan expert evidence …’. He believes that the reason why it has persistently failed is that judges do not want to make such appointments owing to lack of time and resources, while lawyers fear that they will not be able to control the witnesses. Conceptually this is an excellent reform, which can only be implemented
if appointment of neutral witnesses is made mandatory by ‘restricting expert
testimony to party chosen court appointed experts’ and compelling ‘the parties
to secure the appointment of neutral experts’.[289]
In mainland Europe, the inquisitorial system using court appointed experts is widely used.[290] Problems seen in the US, such as difficulty in finding or locating experts, difficulty of communication and matters of compensation, have been surmounted in these countries. In France, for example, there is a list of highly qualified experts in all fields which is prepared after strict scrutiny by a committee. Admission to this list is considered an honour, and these experts provide their services for a fixed fee which is considerably lower than the legal fee in England.[291] The defence and the prosecution can call their own expert witnesses but it is rarely ever done.[292]
Lack of time and resources is not an issue since such a system is already in place in mainland Europe. The possible stumbling block for such a
reform appears to be the resistance from trial lawyers, in part due to personal
or financial interest.[293]
(C)Multi-Disciplinary Advisory Panel (MAP)/ Validation Committee
Gary Edmond has proposed setting up a multi-disciplinary advisory panel to overcome the problem of incriminatory scientific evidence being admitted, often without challenge, in criminal proceedings.[294] The advisory panel would include eminent experts from various disciplines, including ‘chemistry, the biosciences, epidemiology or medicine, engineering, statistics or mathematics, experimental psychology, along with forensic sciences, forensic pathology…, legal practice or the judiciary and an academic lawyer’, akin to the NAS committee. The eminent committee members would be selected but not remunerated, to contain costs.[295]
The committee would produce written advice in areas where scientific evidence is problematic. Such areas would be identified by the judiciary, lawyers or scientific institutions, and the committee would produce written reports based on the state of existing knowledge in the relevant research literature. The advice would take the form of a consensus statement on the reliability, or unknown reliability, of the technique, methodology or science in question. The report would be in simple, non-technical language. It would provide useful information to the judiciary, lawyers and the relevant scientists, and would be particularly useful for the defence, who may not have access to such scientific and technical information.[296]
Resistance from key
institutions and organisations in the criminal justice system, the composition
and selection of the panel and the costs involved may be some of the issues to
be addressed before such a reform can be instituted.[297]
Peter Alldridge has proposed ‘an extra-forensic validation committee for scientific methods to be used in courts’.[298] Such a committee would decide which evidence based on a novel scientific procedure is sufficiently reliable to be admitted in legal proceedings and which has outlived its usefulness and is not sufficiently reliable to be admitted as evidence. Such certification and decertification of scientific procedures should preferably be done at an international level.[299]
Scholars have been arguing for a long time that many of the impression identification procedures, such as fingerprinting, bite-mark, footmark and ear print analysis, lack the necessary quality framework and scientific reliability, and yet evidence based on such techniques is regularly admitted as expert evidence. It is apparent that a line has to be drawn somewhere between scientific/forensic evidence that is suitable for legal purposes and that which is not. Judges are often not qualified to make such decisions, and the adversarial system has not been able to determine what is and is not reliable evidence. The advice of expert panels or committees would be helpful in such circumstances.
The principle of the
criminal justice system where the defendant is innocent until proven guilty is
undermined when scientific evidence of doubtful accuracy and reliability is put
forward to the jury by partisan witnesses. In the existing unbalanced
adversarial system where the defence is disadvantaged by lack of resources, the
defendant is ‘effectively guilty until proven innocent’.[300]Reforms
are needed to prevent abuse of scientific evidence in this adversarial process.
Such reform can only come in the form of measures to improve the generation of reliable scientific evidence, developing a system in which partisanship can be eliminated through court appointed experts, and having expert advisory committees to advise on the reliability of the evidence presented to the courts. Reliability tests alone cannot stop dubious scientific evidence from being admitted in the courts.
(6) Conclusion
Sporadically, miscarriages of justice come to light, creating a great deal of media publicity, and the policy makers respond by calling for reforms. The latest are the recommendations by the Law Commission for the introduction of a statutory reliability test for the admissibility of scientific evidence in criminal proceedings.[301] This is despite evidence that reliability tests in other common law jurisdictions have failed to prevent incriminating evidence being adduced in criminal trials. A review of admissibility standards in other common law jurisdictions shows that none of the courts have been able to establish a satisfactory, uniform and practical standard which can determine the validity and reliability of the scientific/forensic evidence that is proffered in criminal proceedings.
The nature of expert witness testimony is such that there can be no cut-and-dried binary distinction between reliable and unreliable, or admissible and inadmissible, evidence. There are no gold standards in science, which is evolving all the time. DNA evidence, latent impression identification evidence such as fingerprinting, and even evidence from peer reviewed publications are now viewed with scepticism. The greater the inherent weakness in the science, the greater the chance of expert witnesses over-claiming the reliability of such evidence. Strengthening the sciences through robust institutional reforms appears to be the way forward when there appears to be no solution to the thorny issue of the reliability of expert evidence in criminal proceedings.
The call for a statutory
reliability test for admissibility of expert testimony followed a spate of
miscarriages of justice in recent years. However, a review of recent miscarriages of justice reveals that there were many factors responsible for the wrongful convictions, the major one being individual failings. It is
quite obvious that a reliability test would not have prevented these
injustices. There were elements of prosecutorial improprieties, lack of
judicial vigilance, weak defence, failure of individual expert witnesses, and
partisan dishonest expert testimony. These weaknesses need to be addressed by
improving vigilance on the part of the judiciary and elimination of partisan
expert testimony. Court appointed or neutral non-partisan expert witnesses and
expert panels will go a long way in achieving these aims.
Tightening the rules of admissibility with a statutory reliability test alone is not going to achieve the aim of preventing incriminating scientific evidence from creeping into the courtroom. Reliability tests have been in existence for over a century but they have not resolved the problems associated with incriminating evidence being admitted in criminal trials. The current standard common law rules, if properly
applied should suffice but it would require more judicial vigilance, as well as
consistency and uniformity in the application of these rules. The further
drawbacks of strict reliability tests are that they reduce judicial pragmatism,
flexibility and discretion which are necessary for the law to develop in tandem
with science, which is constantly evolving. Such flexibility, discretion and pragmatism are in the interest of social utility and justice. What is most
needed, however, is trustworthy evidence which even when weak can be of
substantive probative value. The quality of evidence generated has to be
improved by necessary reforms. Ultimately a balance has to be struck between allowing
juries access to expert testimony to help them reach a conclusion and admitting
expert testimony of sufficient reliability to reduce the likelihood of
miscarriages of justice. A reliability test alone is unlikely to solve all the problems.
Maybe what Chief Justice Rehnquist said 20 years ago, when delivering his dissenting judgment in Daubert, would hold true today: that an evidentiary reliability test based on scientific validity is not necessary and that ‘further developments in this important area of law should be left to future cases’.
Bibliography
Cases
R
v Atkins (Dean) [2009] EWCA
Crim 1876
Attorney-General’s
Reference (No. 2 of 2002) [2003] 1
Cr App R 21
The
Queen v Bonython (1984) 38
SASR 45
R
v Briddick [2001] EWCA
Crim 984
R
v Broughton [2010] EWCA
Crim 549
R
v Buckley [1999] 163 JP 561
R
v C [2010] EWCA Crim 2578
R v Cannings [2004]
EWCA Crim 1; WLR 2607(CA (Crim Div) )
People
v Castro 144 Misc. 2d 956, 545
N.Y.S 2d, 985 (Sup. Ct. 1989)
R
v Chamberlain [1983] 46
A.L.R 493
R
v Ciantar [2005] EWCA Crim 3559
Clark
v Ryan (1960) 103 CLR 486
R
v Clarke [1995] 2 Cr App R 425
R
v Dallagher [2003] 1 Cr.
App. R. 12
Daubert
v Merrell Dow Pharmaceuticals Inc. 509 U.S. 579,(1993)
Davie
v Magistrates of Edinburgh [1953]
S.C. 34
R
v Dlugosz [2013] EWCA Crim 2 (CA
Crim Div)
Folkes, Bart v Chadd and others (1782) 3 Douglas 157, 99 E.R. 589
Frye v United States 293 F 1013 (DC Cir. 1923)
R
v G [2004] EWCA Crim 1240
R
v Gardner [2004] EWCA Crim 1639
General
Electric Co. v Joiner 522 U.S.
136 (1997)
R
v Gilfoyle (Norman Edward) (Appeal against conviction) [2001] 2 Cr. App. R 5 (CA (Crm. Div.))
R
v Harris [2005] EWCA Crim 1990
R
v Harris (Lorraine) [2005] EWCA
Crim 1980 at 270
HM
Advocate v McKie 1999
R
v Hodges [2003] EWCA Crim 290
R
v Hoey [2007] NICC 49
R
v Hookway [1999] Crim LR 750
R
v Ibrahima [2005] EWCA
Crim 1436
Idoport
Pty Ltd v National Australian Bank Ltd [1999] NSWSC 828
R
v J-L.J [2000] 2 SCR 600
R v Kempster [2008] EWCA Crim 975
Kumho
Tire Co. v Carmichael 526 U.S.
137 (1999)
State
v Kunze 97 Wash App 832, 988 P2d
977 (1999)
R
v Luttrell [2004] 2 Cr
App R 31
R
v Mohan [1994] 2 SCR 9
R
v Parenzee [2007] SASC
143
R
v Parker [1912] V.L.R 152
R
v Reed and Reed [2009] EWCA
Crim 2698, [2010] 1 Cr App R 23
R
v Robb [1991] 93 Cr. App. R 161
R v Sally Clark
(No 2) [2003] EWCA Crim 1020
R
v Silverlock [1894] 2 Q.B.
766
R
v Smith [2011] 2 Cr App R 16
R
v Stubbs [2006] EWCA Crim 2312
R
v T [2010] EWCA Crim 2439, [2011] 1 Cr. App. R. 9
R
v Tang [2006] NSWCCA 167
R
v Trochym [2007] 1 SCR 239
R
v Turner [1975] QB 834
Legislations
Criminal
Procedure Rules 2005
Criminal
Procedure Rules 2012
Criminal
Procedure Rule 33
Criminal Procedure Rule 33.7
Evidence Act 1995
(Australia)
Evidence Act 1995
section 70 and 79
United States
Federal Rules of Evidence 1975
United States
Federal Rules of Evidence 2000
United States
Federal Rule of Evidence 702
United States
Federal Rule of Evidence 706
Books
Goodstein D, ‘How science works in Reference Manual on
Scientific Evidence’ 67, 80-82 (Federal Judicial Centre Ed. 2nd
ed. 2000)
Articles
Alldridge P, ‘Scientific expertise and comparative
criminal procedure’ (1999) 3 3 Int’l J. Evidence & Proof 141
Armstrong J S,
‘Discovery and communication of important marketing findings: evidence and
proposals’ (2003) Wharton School, University of Pennsylvania: PA, oct 31 pp at
<repository.upen.edu/cgi/viewcontent.cgi?article=1021>
Bentley D and Lownds P, ‘Low template DNA’ (2011) 1
Archbold Review 5
Bernstein D, Junk science in the United States and
the Commonwealth’ (1996) 21 Yale J. Int’l L. 123
Boaz A and Ashby D, ‘Fit for purpose? Assessing
research quality for evidence based policy and practice’ (2003) ECRC UK Centre
for Evidence Based Policy and Practice: Working paper11 at http://www.kcl.ac.uk/sspp/departments/politicaleconomy/research/cep/pubs/papers/assets/wp11.pdf
Caddy B, Taylor G and Linacre A, ‘A Review of the
Science of Low Template DNA Analysis’ (2008) at http://www.bioforensics.com/articles/Caddy_Report.pdf
Caleb J, ‘Beyond People
v Castro: A new standard of admissibility for DNA fingerprinting’ (1991) 7
J. Contemp. Health & Pol’y 269
Cheng E, and Yoon A, ‘Does Frye or Daubert matter? A
study on scientific admissibility standards’ (2005) 91 Va. L. Rev. 471
Cole S, ‘Fingerprinting: The first junk
science?’(2003) 28 Okla. City U. L. Rev.73
Cole S, ‘Comment on ‘scientific validation of
fingerprint evidence under Daubert’
(2007) 7 Law, Probability and Risk 119
Cole
S and Roberts A, ‘Certainty, individualisation and subjective nature of expert
fingerprinting evidence’ (2012) 11 Crim. L. R. 824
Committee on DNA Tech. In Forensic Science, Nat’l
Res. Council, DNA Technology in Forensic Science vii (1992)
Deitch A, ‘An inconvenient tooth: Forensic
odontology is an inadmissible junk science when it is used to “match” teeth to
bite-marks in skin’ (2009) Wis. L. Rev. 1205
Dror I and
Hampikian G, ‘Subjectivity and bias in forensic DNA mixture interpretation’
(2011) 51 Science and Justice 2004
Edmond
G, ‘Constructing miscarriages of justice: Misunderstanding scientific evidence
in high profile criminal appeals’ (2002)22 Oxford J. Legal stud. 53
Edmond G, ‘Pathological science? Demonstrable
reliability and expert forensic pathological evidence’ at www.attorneygeneral.jus.gov.on.ca
accessed on 10 march 2013
Edmond G, ‘Specialised knowledge, the exclusionary
discretion and reliability: Reassessing incriminating expert opinion evidence’
(2008) 31 NSWLJ 1
Edmond G,
‘Judging the scientific and medical literature: Some legal implication of
changes to biomedical research and publication’ (2008) 28(3) Oxford J. Legal Studies
523
Edmond G, ‘Advice for the courts? Sufficiently
reliable assistance with forensic science and medicine (Part 2), (2012) 16
Int’l J. Evidence & Proof 263
Focus:
‘What are the standards for quality research’ (2005) NCDRD Technical Brief
number 9 at www.acddr.org/kt/products/focus/focus9/
Gatowski S et al., ‘Asking the gatekeepers: A
national survey of judges on judging expert evidence in a post-Daubert world’ (2001) 25(5) Law and
Human Behaviour 433
Giannelli P and Imwinkelried E, ‘Scientific evidence:
the fallout from the Supreme Court’s decision in Kumho Tire’ (2000) 14 Crim. Just. 12
Giannelli P,
‘Forensic science, Frye, Daubert and
the Federal rules’ (1993) 29 Criminal Law Bulletin 428 at 432
Giannelli P, ‘Forensic symposium: The use and misuse
of forensic evidence – Admissibility of scientific evidence’ (2003) 28 Okla.
City U. L. Rev. 1
Giannelli P, ‘Bite-mark evidence’ (2007-2008) 22
Crim. Just. 42
Gould J and Leo R, ‘One hundred years later:
Wrongful convictions after a century of research’ (2010) 100 (3) The Journal of
Criminal Law & Criminology 825
Grayson L,
‘Evidence based policy and quality of evidence: Rethinking peer review’ (2002)
ESRC UK Centre for Evidence Based Policy and Practice (Working Paper 7) at www.kcl.ac.uk.
Gross S, ‘Expert Evidence’ (1991) Wis. L. Rev. 1113
Haber L and Haber R, ‘Scientific validation of
fingerprinting evidence under Daubert’ (2008) 7 Law, Probability and Risk 87
Halpin S, ‘What
have we got ear then? Developments in forensic science: Ear Prints as
identification evidence at criminal trials’ (2008) 8 U. C. Dublin L. Rev. 65
Hon. Hammond G, ‘The new miscarriages of justice’
(2006) 14 Waikato L. Rev. 1
Howard M.N., ‘The neutral expert: a plausible threat
to justice’ (1991) Feb Crim. L. R. 98
Jamieson A,
‘Case note - LCN DNA analysis and opinion on transfer: R v Reed and Reed’ (2011) Int’l J. Evidence & Proof 161
Jobling M and Gill P, ‘Encoded evidence: DNA in
forensic analysis’ (2004) 5 Nature Reviews: Genetics 739
Justice
McClellan P, ‘Admissibility of expert evidence under the Uniform Evidence Act’,
Judicial College of Victoria, Emerging issues in expert evidence workshop,
Melbourne, 2009
Keiser-Nielsen S, ‘Forensic odontology’ (1969) 1 U. Toledo
L. Rev. 633
Kohar R, FDCC Quarterly, Spring 2007
Koppl R, ‘How to improve
forensic science’ (2005) E.J.L. & E. 256
Kovera M et al, ‘Assessment of the common sense
psychology underlying Daubert’ (2002)
8 Psychology Public Policy and Law 180
Lustre A, ‘Annotations, Post-Daubert standards for
admissibility of scientific and other evidence in state courts’ (2001)90 A.L.R.
453
Lynch M, ‘God’s signature: DNA profiling, the new
gold standard in forensic science’ (2003) 27 Endeavour 93
Majamaa H and Ytti A, ‘Survey of the conclusion
drawn of similar footwear cases in various crime laboratories’ (1996) 82
Forensic Science International 109
Mason M, ‘The scientific evidence problem: A
philosophical approach’ (2001) 33 Ariz. St. L.J. 887
Meijer L, Thean
A and Maat G, ‘Ear prints in forensic investigations’ (2005) 1:4 Forensic Sci.
Med. Pathol. 247
Mnookin J, ‘The validity of fingerprinting
identification: Confessions of a moderate’ (2008) 7 Law, Probability and Risk,
127
Note, ‘Admitting doubt: A
new standard for scientific evidence’ (2009-2010) 123 Harv. L. Rev. 2021
O’Brian
W, ‘Court scrutiny of expert evidence: recent decisions highlight the tensions’
(2003) 7 (3) International Journal of Evidence & Proof 172
Odgers S and
Richardson J, ‘Keeping bad science out of the courtroom’ (1995) 18 U.N.S.W.L.J.
108
Pretty
I and Sweet D, ‘A paradigm shift in the analysis of bite-marks’ (2010) 201
Forensic Science International 38
Pretty I and Sweet D, ‘The judicial view of
bitemarks within the United States criminal justice system’ (2006) 24 J.
Forensic Odonto-Stomatology 1
Reisinger K, ‘Court- appointed expert panels: A
comparison of two models’ (1998) 32 Ind. L. Rev. 225
Rennison A and
Pugh G, ‘Developing a quality standard for fingerprinting examination’ at www.homeoffice.gov.uk/agencies-public-bodies/fsr/
Riffe B, ‘Comment, The aftermath of Melendez: Highlighting the need for
accreditation-based rules of admissibility for forensic evidence’ (2010) 27 T.
M. Cooley L. Rev. 165
Roberts A, ‘Drawing on
expertise: legal decision- making and the reception of expert evidence’ (2008)
6 Crim. L.R. 443
Roberts
P, ‘Forensic evidence after Runciman’ (1994) Crim. L. R. 780
Robins
J, ‘The fall-out of closing the Forensic Science Service’ (2012) 176 Criminal
Law & Justice Weekly 107
Rodrigues P, ‘Towards a new standard for admission
of expert evidence in Illinois: A critique of the Frye general acceptance test
and an argument for adoption of Daubert’
(2009-2010) 34 Ill. U. L. J. 289
Saks M, ‘The Phantom of the Courthouse’ (1995) 35
Jurimetrics 233
Saks M et al, ‘Model prevention and remedy of
erroneous convictions Act’ (2001) 33 Ariz. St. L. J. 669
Smith R, ‘Peer review: Reform or revolution?’(1997)
315 British Medical Journal 759
Spencer J. R., ‘The neutral expert – an implausible
bogey’ (1991) Feb Crim. L. R. 106
Thompson W, ‘Evaluating the admissibility of new
genetic identification tests: Lessons from the “DNA wars”’ (1993/94) 84 J.
Crim. L. & Criminology 22
Weir
B, ‘The second National Research Council report on forensic DNA evidence’
(1996) 59 Am. J. Hum. Genet. 500
Wilson A, ‘Expert opinion evidence: the middle way’
(2009) 73 (5) J. Crim. L. 430
Wilson A, ‘Expert testimony
in the dock’ (2005) 69 Journal of Criminal Law 330
Young R and Sanders A, ‘The Royal Commission on
Criminal Justice: A confidence trick?’ (1994) 14 Oxford J. Legal Stud. 435
Other Sources
Guidance- Interpretation of DNA evidence www.gov.uk/goverment/publication/the
interpretation-of-dna-evidence
House
of Commons Science and Technology Committee; The Forensic Science Service-
Seventh Report of session 2010-2012 at http://www.publications.parliament.uk/pa/cm201012/cmselect/cmsctech/855/855.pdf,
accessed on 24/6/2013
Justice May, Second Report: Return to an address of
the Honourable the House of Commons dated 3rd December 1992 for a
report of the Inquiry into the circumstances surrounding the convictions arising
out of the bomb attacks in Guildford and Woolwich in 1974 (1992).
Justice May, Final Report: Return to an address of
the Honourable the House of Commons dated 3rd December 1992 for a
report of the Inquiry into the circumstances surrounding the convictions
arising out of the bomb attacks in Guildford and Woolwich in 1974 (1994)
Law Commission, ‘Expert Evidence in Criminal
Proceedings in England and Wales’ (Law Com No 325), March 2011
National Research Council, ‘Strengthening forensic
science in the United States: A Path Forward’ at http://ag.ca.gov/meetings/tf/pdf/2009_NAS_report.pdf
[1] Richard Young
and Andrew Sanders ‘The Royal Commission on Criminal Justice: A confidence
trick?’(1994) 14 Oxford J. Legal Stud. 435.
[2]Royal Commission on Criminal Justice (Runciman Commission),
1991-1993
http://discovery.nationalarchives.gov.uk/SearchUI/Details?uri=C3042 and Justice May,
Final Report: Return to an address of the Honourable the House of Commons dated
3rd December 1992 for a report of the Inquiry into the circumstances
surrounding the convictions arising out of the bomb attacks in Guildford and
Woolwich in 1974 (1994).
[3] Paul Roberts,
‘Forensic evidence after Runciman’ (1994) Crim. L. R. 780.
[4] Gary Edmond,
‘Constructing miscarriages of justice: Misunderstanding scientific evidence in
high profile criminal appeals’ (2002) 22 Oxford J. Legal Stud. 53.
[5] House of Commons
Science and Technology Committee, Forensic Science on Trial, Seventh Report of
Session 2004-5, HC 96-1.
[6] Law Commission
Consultation Paper No 190 (2009) on admissibility of expert evidence in
criminal proceedings in England and Wales at http://lawcommission.justice.gov.uk/docs/cp190_Expert_Evidence_Consultation.pdf .
[7] Law Commission,
Expert Evidence in Criminal Proceedings in England and Wales (Law Com No 325),
March 2011 at http://lawcommission.justice.gov.uk/docs/lc325_Expert_Evidence_Report.pdf.
[8] Gary Edmond, ‘Is reliability sufficient? The Law Commission and expert evidence in
international and interdisciplinary perspective: Part 1’ (2012) 16 (1) Int’l
J. Evidence & Proof 30 at 42 and 47.
[9] Folkes, Bart. v Chadd and others (1782) 3 Douglas 157, 99 E.R. 589.
[14] See, William
O’Brian, ‘Court scrutiny of expert evidence: recent decisions highlight the
tensions’ (2003) 7 (3) International Journal of Evidence & Proof 172 and
Adam Wilson, ‘Expert opinion evidence: the middle way’ (2009) 73 (5) J. Crim.
L. 430.
[15] R v Atkins (Dean) [2009] EWCA Crim 1876;
R v Clarke [1995] 2 Cr App R 425; R v Hookway [1999] Crim LR 750; R v Briddick [2001] EWCA Crim 984; Attorney-General’s Reference (No. 2 of
2002) [2003] 1 Cr App R 21 and R v
Gardner [2004] EWCA Crim 1639.
[16] R v Harris (Lorraine) [2005] EWCA Crim
1980 at 270.
[17] Andrew Roberts, ‘Drawing on expertise: legal decision-
making and the reception of expert evidence’ (2008) 6 Crim. L.R 443.
[18] R v Gilfoyle (Norman Edward)
(Appeal against conviction) [2001] 2
Cr. App. R 5 (CA (Crim Div)).
[22] [2004] 2 Cr App
R 31.
[23] [2005] EWCA Crim
3559.
[24] (1984) 38 SASR
45(Australia).
[25] R v Luttrell at 32.
[26] R v Ciantar at 21.
[27] R v Harris [2005] EWCA Crim 1980; R v Clarke [1995] 2 Cr App R 425; R v Hodges [2003] EWCA Crim 290; R v Ibrahima [2005] EWCA Crim 1436; R v Stubbs [2006] EWCA Crim 2312; R v Luttrell [2004] EWCA Crim 1344; R v G [2004] EWCA Crim 1240.
[29] Australian Law Reform Commission, Review of the Uniform Evidence
Acts, Discussion Paper 69, 2005 at www.alrc.gov.au .
[30] R v Dallagher [2003] 1 Cr. App. R. 12.
[33] [2009] EWCA Crim
2698, [2010] 1 Cr App R 23.
[34] Ibid at 111.
[35] See R v George [2002] EWCA Crim 1923.
[36] See R v Sally Clark (No 2) [2003] EWCA Crim 1020 and R v Cannings [2004] EWCA Crim 1; WLR 2607(CA (Crim Div) ).
[37] National
Research Council, ‘Strengthening forensic science in the United States: A part
forward’ at http://ag.ca.gov/meetings/tf/pdf/2009_NAS_report.pdf.
[38] 293 F. 1013 (D.C. Cir. 1923).
[39] Ibid at 1014.
[40] Paul Rodrigues,
‘Towards a new standard for admission of expert evidence in Illinois: A
critique of the Frye general
acceptance test and an argument for adoption of Daubert’ (2009-2010) 34 Ill. U. L. J. 289.
[42] Stephen Odgers and James Richardson, ‘Keeping bad
science out of the courtroom’ (1995) 18 U.N.S.W.L.J 108.
[43] P Giannelli, ‘Forensic science, Frye, Daubert and the Federal rules’ (1993) 29 Criminal Law
Bulletin 428 at 432.
[44] Federal Rules of
Evidence 702 at http://www.uscourts.gov/uscourts/RulesAndPolicies/rules/2010%20Rules/Evidence.pdf.
[50] Ibid at 601.
[51] Sophia Gatowski
et al., ‘Asking the gatekeepers: A national survey of judges on judging expert
evidence in a post-Daubert world’
(2001) 25(5) Law and Human Behaviour 433 and Margaret Kovera et al, ‘Assessment
of the common sense psychology underlying Daubert’
(2002) 8 Psychology Public Policy and Law 180.
[52] Alice Lustre,
‘Annotations, Post-Daubert standards for admissibility of scientific and other
evidence in state courts’ (2001)90 A.L.R. 453.
[53] Edward K Cheng,
and Albert H Yoon, ‘Does Frye or Daubert matter? A study on scientific
admissibility standards’ (2005) 91 Va. L. Rev. 471.
[58] Note, ‘Admitting doubt: A new standard for scientific
evidence’ (2009-2010) 123 Harv. L. Rev. 2021.
Also see Section 3 below under ‘Reliability of scientific evidence’.
[69] [1953] S.C. 34.
[70] Ibid at 40.
[71] [2007] 1 SCR
239.
[72] Ibid at 1.
[73] Ibid at 27.
[74] Ibid at 36.
[75] Gary Edmond,
‘Pathological science? Demonstrable reliability and expert forensic
pathological evidence’ at www.attorneygeneral.jus.gov.on.ca accessed on 10
March 2013.
[76] Ibid.
[77] David Bernstein,
‘Junk science in the United States and the Commonwealth’ (1996) 21 Yale J. Int’l
L. 123.
[78] Stephen Odgers
and James Richardson, ‘Keeping bad science out of the courtroom- changes in
American and Australian expert evidence law’(1995) 18 U.N.S.W.L.J 108.
[79] Ibid.
[80] [1912] V.L.R
152.
[82] Austl. Law Reform Commission Interim Report No. 26 at
www.austlii.edu.au accessed on 11 March 2013.
[85] (1960) 103 CLR
486.
[86] (1984) 38 SASR
45.
[87] Justice Peter
McClellan, ‘Admissibility of expert evidence under the Uniform Evidence Act’,
Judicial College of Victoria, Emerging issues in expert evidence workshop,
Melbourne, 2009.
[88] Ibid.
[89] [1999] NSWSC
828.
[90] [2006] NSWCCA
167.
[91] Above at n 87.
[92] Gary Edmond,
‘Specialised knowledge, the exclusionary discretion and reliability:
Reassessing incriminating expert opinion evidence’ (2008) 31 NSWLJ 1.
[96] The Law
Commission (Law Com No 325), Expert evidence in criminal proceedings in England
and Wales, (21 March 2011) at http://lawcommission.justice.gov.uk/docs/lc325_Expert_Evidence_Report.pdf.
[97] Ibid 5.17.
[98] Ibid at page 32, 3.62.
[99] Ibid at page 65, 5.35.
[101] National
Research Council, ‘Strengthening forensic science in the United States: A Path
forward’ at http://ag.ca.gov/meetings/tf/pdf/2009_NAS_report.pdf.
[102] Michael Lynch,
‘God’s signature: DNA profiling, the new gold standard in forensic science’
(2003) 27 Endeavour 93.
[103] Ibid.
[104] Mark Jobling
and Peter Gill, ‘Encoded evidence: DNA in forensic analysis’ (2004) 5 Nature
Reviews: Genetics 739.
[105] Ibid.
[107] Ibid.
[108] Ibid.
[109] Ibid.
[110] William
Thompson, ‘Evaluating the admissibility of new genetic identification tests:
Lessons from the “DNA wars”’ (1993/94) 84 J. Crim. L. & Criminology 22.
[111] Above at n101.
[112] Ibid at 149.
[113] Ibid at 16,98,108-109.
[114] Ibid at 88-89.
[115] Above at n110.
[116] Ibid.
[117] Ibid.
[118] Ibid.
[119] Ibid.
[120] Ibid.
[121] Ibid.
[122] Committee on DNA
Tech. In Forensic Science, Nat’l Res. Council, DNA Technology in Forensic
Science vii (1992).
[123] Ibid at 83.
[124] Bruce Weir, ‘The
second National Research Council report on forensic DNA evidence’ (1996) 59 Am.
J. Hum. Genet. 500.
[125] 144 Misc. 2d
956, 545 N.Y.S 2d, 985 (Sup. Ct. 1989).
[126] John Caleb,
‘Beyond People v Castro: A new standard of admissibility for DNA fingerprinting’
(1991) 7 J. Contemp. Health & Pol’y 269.
[127] Ibid.
[128] Guidance-
Interpretation of DNA evidence www.gov.uk/goverment/publication/the
interpretation-of-dna-evidence.
[129] Ibid at 1.
[130] Brian Caddy,
Graham Taylor and Adrian Linacre, ‘A Review of the Science of Low Template DNA
Analysis’ (2008) at http://www.bioforensics.com/articles/Caddy_Report.pdf.
[131] Above at n128.
[132] [2007] NICC 49.
[133] Ibid at 46.
[134] Ibid at 46.
[135] Ibid at 62.
[136] Above at n130.
[137] [2009] EWCA Crim
2698, [2010] 1 Cr App R 23.
[144] Alan Jamieson, ‘Case note - LCN DNA analysis and
opinion on transfer: R v Reed and Reed’ (2011)
Int’l J. Evidence & Proof 161.
[146] Itiel Dror and Greg Hampikian, ‘Subjectivity and bias in
forensic DNA mixture interpretation’ (2011) 51 Science and Justice 204.
[147] Paul Giannelli,
‘Forensic symposium: The use and misuse of forensic evidence – Admissibility of
scientific evidence’ (2003) 28 Okla. City U. L. Rev. 1.
[148] Ibid.
[150] Above at n101,
page 144.
[151] Above at n149
page 48.
[152] Simon Cole,
‘Fingerprinting: The first junk science?’(2003) 28 Okla. City U. L. Rev.73.
[153] Lyn Haber and
Ralph Haber, ‘Scientific validation of fingerprinting evidence under Daubert’
(2008) 7 Law, Probability and Risk 87.
[154] Jennifer
Mnookin, ‘The validity of fingerprinting identification: Confessions of a
moderate’ (2008) 7 Law, Probability and Risk, 127.
[156] Above at n154.
[158] Ibid.
[159] Simon Cole,
‘Comment on “Scientific validation of fingerprint evidence under Daubert”’
(2007) 7 Law, Probability and Risk 119.
Jennifer Mnookin, ‘The validity of fingerprinting identification:
Confessions of a moderate’ (2008) 7 Law, Probability and Risk, 127.
[160] Simon Cole and
Andrew Roberts, ‘Certainty, individualisation and subjective nature of expert
fingerprinting evidence’ (2012) 11 Crim. L. R. 824.
[161] HM Advocate v McKie 1999.
[163] Ibid at 34.20.
[164] Ibid at 34.21.
[165] Above at n101,
page 12 and 13.
[166] R v Buckley [1999] 163 JP 561 at 7.
[168] Ibid at 61.
[169] Ibid at 61.
[170] Ibid at 63.
[171] Andrew Rennison and Gary Pugh, ‘Developing a quality
standard for fingerprinting examination’ at www.homeoffice.gov.uk/agencies-public-bodies/fsr/.
[174] Iain Pretty and
David Sweet, ‘A paradigm shift in the analysis of bite-marks’ (2010) 201
Forensic Science International 38.
[175] Adam Deitch, ‘An
inconvenient tooth: Forensic odontology is an inadmissible junk science when it
is used to “match” teeth to bite-marks in skin’ (2009) Wis. L. Rev. 1205.
[176] S
Keiser-Nielsen, ‘Forensic odontology’ (1969) 1 U. Toledo L. Rev. 633.
[177] Paul Giannelli,
‘Bite-mark evidence’ (2007-2008) 22 Crim. Just. 42.
[178] Above at n175.
[179] Above at n177.
[180] Above at n174.
[181] Above at n177.
[182] I.A. Pretty and
D.J. Sweet, ‘The judicial view of bitemarks within the United States criminal
justice system’ (2006) 24 J. Forensic Odonto-Stomatology 1.
[184] Ibid.
[185] Ibid at page 148.
[186] Ibid.
[187] Heikki Majamaa
and Anja Ytti, ‘Survey of the conclusion drawn of similar footwear cases in
various crime laboratories’ (1996) 82 Forensic Science International 109.
[188] Ibid at 119.
[189] Above at n101
page 149.
[190] [2010] EWCA Crim
2439, [2011] 1 Cr. App. R. 9.
[191] Ibid at 86.
[192] Ibid at 76.
[193] Ibid at 73 and 96.
[196] Lynn Meijer, Andrew Thean and George Maat, ‘Ear prints
in forensic investigations’ (2005) 1:4 Forensic Sci. Med. Pathol. 247.
[198] Simon Halpin, ‘What have we got ear then? Developments
in forensic science: Ear Prints as identification evidence at criminal trials’
(2008) 8 U. C. Dublin L. Rev. 65.
[199] 97 Wash App 832,
988 P2d 977 (1999).
[200] Ibid at page 9.
[201] Ibid.
[202] [2002] EWCA Crim
1903, [2003] 1 Cr App R 12.
[203] Ibid at 199.
[204] Ibid at 201.
[205] Ibid at 200 and 202.
[206] Subsequent DNA
testing showed that Dallagher could not have been the person responsible for
the ear prints.
[207] [2008] EWCA Crim
975.
[211] Daubert v. Merrell Dow Pharmaceuticals,
Inc., 509 US 579 - Supreme Court 1993 at 593.
[212] Richard Smith,
‘Peer review: Reform or revolution?’(1997) 315 British Medical Journal 759.
[213] Annette Boaz and
Deborah Ashby, ‘Fit for purpose? Assessing research quality for evidence based
policy and practice’ (2003) ESRC UK Centre for Evidence Based Policy and
Practice: Working Paper 11 at http://www.kcl.ac.uk/sspp/departments/politicaleconomy/research/cep/pubs/papers/assets/wp11.pdf.
[214] Lesley Grayson, ‘Evidence based policy and quality of
evidence: Rethinking peer review’ (2002) ESRC UK Centre for Evidence Based
Policy and Practice (working Paper 7) at www.kcl.ac.uk.
[217] Ibid; Pearce published an article in the British Journal of Obstetrics and
Gynaecology in 1994 in which he claimed to have successfully transplanted an
embryo from the fallopian tube to the uterus, leading to the delivery of a baby. The
article was co-authored by Pearce’s head of department, who was also the
president of the Royal College of Obstetricians and Gynaecologists. The fraud
was detected later.
[218] Armstrong J S, ‘Discovery and communication of
important marketing findings: evidence and proposals’ (2003) Wharton School,
University of Pennsylvania: PA, Oct, 31 pp at
<repository.upen.edu/cgi/viewcontent.cgi?article=1021> accessed on 10th
June 2013.
[219] Gary Edmond, ‘Judging the scientific and medical
literature: Some legal implications of changes to biomedical research and
publication’ (2008) 28(3) Oxford J. Legal Studies 523.
[221] Focus: ‘What are
the standards for quality research’ (2005) NCDRD Technical Brief number 9 at www.acddr.org/kt/products/focus/focus9/. Accessed on 15th
March 2013.
[223] Ibid.
[224] Hon. Grant
Hammond, ‘The new miscarriages of justice’ (2006) 14 Waikato L. Rev. 1.
[225] Jon b. Gould and
Richard A. Leo, ‘One hundred years later: Wrongful convictions after a century
of research’ (2010) 100 (3) The Journal of Criminal Law & Criminology 825.
[226] Michael Saks et
al, ‘Model prevention and remedy of erroneous convictions Act’ (2001) 33 Ariz.
St. L. J. 669.
[228] Justice May,
Second Report: Return to an address of the Honourable the House of Commons
dated 3rd December 1992 for a report of the Inquiry into the
circumstances surrounding the convictions arising out of the bomb attacks in
Guildford and Woolwich in 1974 (1992).
[229] Ibid at 8.1 and 8.9.
[230] Ibid at 9.5.
[231] Ibid at 10.11.
[232] Ibid at 11.16.
[234] Above at n228 at
11.14.
[235] Ibid at 14.2.
[236] Ibid at 14.6.
[237] Gary Edmond,
‘Constructing miscarriages of justice: Misunderstanding scientific evidence in
high profile criminal appeals’ (2002) 22 Oxford J. Legal Stud. 53.
[238] Justice May,
Final Report: Return to an address of the Honourable the House of Commons dated
3rd December 1992 for a report of the Inquiry into the circumstances
surrounding the convictions arising out of the bomb attacks in Guildford and
Woolwich in 1974 (1994).
[240] Ibid at 93.
[241] Ibid at 169.
[249] Ibid at 133.
[250] Ibid at 137.
[251] Ibid at 137.
[252] Ibid at 141 and 145.
[253] Ibid at 147.
[254] Ibid at 160.
[255] Ibid at 178.
[256] Ibid at 179.
[261] David Goodstein, ‘How science works’ in Reference
Manual on Scientific Evidence 67, 80-82 (Federal Judicial Centre ed., 2nd
ed. 2000).
[262] Michael C.
Mason, ‘The scientific evidence problem: A philosophical approach’ (2001) 33
Ariz. St. L.J. 887.
[263] Above at n226 and n225.
[273] P Giannelli and
E Imwinkelried, ‘Scientific evidence: the fallout from the Supreme Court’s
decision in Kumho Tire’ (2000) 14
Crim. Just. 12.
[274] House of Commons
Science and Technology Committee; The Forensic Science Service- Seventh Report
of session 2010-2012 at http://www.publications.parliament.uk/pa/cm201012/cmselect/cmsctech/855/855.pdf, accessed on
24/6/2013. The FSS was closed down on 31st March 2012.
[275] Ibid.
[276] Ibid.
[277] Ibid.
[279] Proposed bill
‘The Forensic Science and Standards Act of 2012, S. 3378’ http://thehill.com/images/stories/blogs/flooraction/jan2012/s3378.pdf accessed on 25
March 2013.
[280] Jon Robins, ‘The
fall-out of closing the Forensic Science Service’ (2012) 176 Criminal Law &
Justice Weekly 107.
[281]Law Commission,
Expert Evidence in Criminal Proceedings in England and Wales (Law Com No 325),
March 2011 at 6.41; 6.78 and 6.79.
[282] Ibid. (Part 33 of the Criminal Procedure Rules 2012
already provides judges with this power).
[283] Ibid at 6.45- 6.46.
[284] Samuel Gross,
‘Expert Evidence’ (1991) Wis. L. Rev. 1113.
[285] Karen Reisinger,
‘Court- appointed expert panels: A comparison of two models’ (1998) 32 Ind. L.
Rev. 225.
[286] Michael Saks,
‘The Phantom of the Courthouse’ (1995) 35 Jurimetrics 233.
[287] Ibid. In the US, such court-appointed expert
witnesses, when used, have most often appeared in civil rather than criminal cases.
[288] Above at n284.
[290] M.N. Howard,
‘The neutral expert: a plausible threat to justice’ (1991) Feb Crim. L. R. 98.
[291] J.R. Spencer,
‘The neutral expert – an implausible bogey’ (1991) Feb Crim. L. R. 106.
[292] Ibid.
[293] Peter Alldridge,
‘Scientific expertise and comparative criminal procedure’ (1999) 3(3) Int’l J.
Evidence & Proof 141.
[294] Gary Edmond,
‘Advice for the courts? Sufficiently reliable assistance with forensic science
and medicine (Part 2)’ (2012) 16 Int’l J. Evidence & Proof 263.
[295] Ibid.
[296] Ibid.
[297] Ibid.
[298] Above at n293.
[299] Ibid.
[300] Beth Riffe,
‘Comment, The aftermath of Melendez: Highlighting the need for
accreditation-based rules of admissibility for forensic evidence’ (2010) 27 T.
M. Cooley L. Rev. 165.