David N Khey & Ian Tebbett. 21st Century Criminology: A Reference Handbook. Editor: J Mitchell Miller. 2009. Sage Publication.
For many Americans, the word forensics evokes a cascade of vibrant imagery that entails crime and intrigue. It is a buzzword for DNA, bite marks, bullet wounds, fingerprints, autopsy, gore, death investigations, semen stains, and rape kits. This, however, is only a small part of a much larger picture. Forensics itself is extremely broad—it is the application of the scientific method to assist the law. This can mean almost anything—accountants who perform analysis to assist the courts are forensic accountants; computer enthusiasts who hack into the hard drives of sexual predators are forensic computer technicians; physical anthropologists who study bones in a legal investigation are forensic anthropologists. The field of forensics is growing, and the list becomes even longer as more divisions of labor and specialization occur. With this large influx of experts in fields that expand with technology and multitudes of new techniques, it is amazing that the courts can even keep up.
The many different disciplines that make up forensic science have been embedded in popular culture since their inception. Long before criminal investigations incorporated the use of fingerprints, document examination, blood spatter pattern analysis, gunshot trajectories, accident reconstruction, and the like, these were the topics of fiction. Most familiar to many, Sherlock Holmes and his partner, Dr. Watson, applied the scientific method and stellar detective work to solve crimes and thereby introduced these concepts to the masses. Broadly speaking, this is the definition of forensic science: applying science and technology to legal investigations, whether civil or criminal. From the “medicolegal” examination of a human body postmortem (after death) to analyzing the breath of a driver who had a few too many drinks, the reconstruction of how the Twin Towers collapsed, and the identification of unknown soldiers and civilians in battlefield mass graves, these practices have long been integrated into Westernized contemporary court and justice systems. Yet it is only in recent decades that the abilities of forensic scientists have vastly expanded, owing to a renaissance of scientific breakthroughs. The purpose of this chapter is to give an overview of the primary areas of forensic science and to review the breakthroughs and controversies within each of its disciplines. Secondarily, this chapter provides an introduction to how the courts screen expert witnesses and concludes with a summary of important recent developments in forensic science.
Primary Areas of Forensic Science
Many of the foundations of forensic science are rooted in keen criminal investigative principles combined with analysis using the scientific method. The work of Edmond Locard is a case in point. In the early 1900s, Locard developed a simple investigative principle that has stood the test of time and is very much incorporated in today’s detective work. Basically, Locard realized that as individuals interact with others or come in contact with objects in an environment, a “cross-transfer” of microscopic and macroscopic elements will occur. A bit of a dog owner’s hair will remain on the person’s clothes and may be left at a crime scene along with some skin cells, and perhaps some of the carpet fibers at the crime scene will cling to the cuff of an individual’s pants; either way, evidence of this transfer may serve as a substantial piece of circumstantial evidence in a case. This principle, called “Locard’s exchange principle,” is at the heart of trace evidence and criminal investigations. Much of the forensic investigation performed at a crime scene, such as utilizing the exchange principle to collect and subsequently analyze evidence, falls in the area of criminalistics.
The majority of the forensic services provided in a robust crime laboratory belong to a discipline called criminalistics. Put simply, this area of forensic science seeks to process physical evidence collected from a crime scene and produce a final report based on analysts’ findings. It is also the broadest category of forensic science, with many subspecialty units and much expertise. The easiest way to classify these services is to divide them by the units typically found in a robust crime lab: controlled substances, serology/biological screening, DNA, trace analysis, firearms/explosives, toolmarks, questioned documents, latent prints, and toxicology. While a vast number of different analyses fall within these areas, the highlighted areas are not exhaustive. It is also important to note that most contemporary controversies revolve around the more subjective analyses performed by these analysts.
Over the last decade, the Bureau of Justice Statistics (BJS) commissioned a census of publicly funded crime labs to gain a better understanding of the collective trends in forensic services in the United States. By order of usage of service, controlled substances examination has persisted in being the most requested over the years of the census (Durose, 2008). Simply put, these requests concern seized substances thought to be illicit or unidentified controlled drugs. To perform the examination, analysts use a two-pronged process to first screen substances and then use these preliminary data to run a confirmation analysis if the initial test screens positive for a controlled substance. This secondary analysis has the power to examine the unknown substance both qualitatively and quantitatively with high levels of statistical certainty. Thus, at the end of a controlled substance analysis, investigators will learn the composition of the substances submitted for testing, down to their molecular makeup. For example, if an unknown white powder is examined, controlled substance analysis will show the different components that constitute that powder and to what extent these components make up the whole sample—perhaps 85% cocaine, 5% lidocaine, and 10% baking powder (sodium bicarbonate).
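The arithmetic behind such a composition report can be sketched in a few lines of Python. The quantitation figures below are invented for illustration; real confirmation work depends on calibrated instrumentation (e.g., gas chromatography–mass spectrometry), not on numbers like these.

```python
# Hypothetical quantitation results for an unknown powder (masses in mg).
# All figures are invented for illustration.
def percent_composition(component_masses_mg):
    """Return each component's share of the total sample, as a percentage."""
    total = sum(component_masses_mg.values())
    return {name: round(100 * mass / total, 1)
            for name, mass in component_masses_mg.items()}

sample = {"cocaine": 42.5, "lidocaine": 2.5, "sodium bicarbonate": 5.0}
print(percent_composition(sample))
# → {'cocaine': 85.0, 'lidocaine': 5.0, 'sodium bicarbonate': 10.0}
```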
This differs slightly from toxicological services, as the analyses in this area serve to qualify and quantify controlled substances and their metabolites in biological matrices (e.g., blood, saliva, hair, urine, and vitreous fluid of the eye), as well as toxic substances (e.g., mercury, arsenic, and cyanide), alcohol, over-the-counter products, and many other compounds foreign to the body. Depending on the circumstances, investigators typically request only a certain subset of toxicological examinations, as a full “tox” screen is costly and wasteful. In particular, two situations call for a more comprehensive toxicological examination: offender/probationer/parolee drug screening and postmortem toxicology. In the BJS census, toxicological services were requested a distant second (298,704 requests in 2002; 251,585 requests in 2005) behind controlled substances (844,183 requests in 2002; 855,817 requests in 2005). There is limited subjectivity in these areas of forensic science—thus, controversies are limited to individual cases.
Latent print analysis seems ubiquitous in forensic science and police investigations. Examining visible (patent) or invisible (latent) fingerprints and comparing them to known samples or a computer database called AFIS (Automated Fingerprint Identification System) is a long-standing tradition in the field, and is the third most common type of request for forensic services (Durose, 2008). Based on the premise that no two fingerprints are alike—even identical twins have fingerprints that differ—the criminal justice system as well as private security firms have invested heavily in fingerprint-driven identification systems. Using proficiency tests, or controlled examinations designed to gauge the accuracy and reliability of forensic analyses, examiners have proven to be very accurate in their identifications, given typical casework circumstances. Thornton and Peterson (2002) find that misidentifications are rare in typical casework (fewer than 0.5% of comparisons); correct identification lies within the 98%–99% range under normal circumstances. Fingerprint analysis is one of only a few analytical tests, along with DNA typing and blood typing, that achieve such high success rates.
Firearm and toolmark analysis, the next most requested service, is an example of a subset of forensic analysis that contains elevated amounts of subjectivity, which increases the likelihood of error. Shotgun shells and shot pellets, discharged bullets, bullet casings, and any sort of firearm and its ammunition can be examined to understand the origin of a spent bullet, the trajectory of shots fired, and much more. The physical construction of firearms and their mechanisms make relatively unique impressions on fired bullets suitable for these analyses. Forensic science examinations dealing with toolmarks work in a remarkably similar manner. Impressions left by screwdrivers, crowbars, knives, saws—any tool imaginable in a garage—can give investigators an idea of what tools were used in the commission of a crime. If these tools can be identified, additional evidence left on these objects may be collected, if found.
It is true that as time passes, wear and tear on these items may produce remarkably distinctive impressions on objects (e.g., bullets, walls, bone, etc.)—especially when these items are frequently used. When this occurs, forensic firearm and toolmark examiners can determine with greater confidence whether the suspect impression embedded in an object shares a “common origin” with a sample impression made by the firearm/tool in a laboratory. Regardless of whether these conditions are met, these forensic examiners have good success in making these determinations; however, their success can wane in comparison to objective analyses such as DNA testing and blood typing. It is important not to overstate the probative value of these examinations, especially when environmental conditions such as decay or damage make these analyses exceedingly more difficult.
DNA analysis, which has become the gold standard in forensic identification, is the next most requested service in the United States (Durose, 2008). According to the BJS, this service has remained the most backlogged during the census of publicly funded crime labs in the country. This should not be surprising, as this type of forensic analysis is demanding on both human and operational resources. While the field has come a long way since DNA was first used in the criminal justice system more than two decades ago, the average time to complete these requests is typically much longer than for any other forensic service. For example, a typical forensic toxicological analysis may take anywhere from a week to a month, but comparing DNA samples from a suspect or several suspects to biological samples gathered from a crime scene may take anywhere from a few months to a year. Many times, if backlogs become a mounting problem and local or state funding permits, outsourcing to private labs may be an option. In fact, about 28% of the crime labs included in the BJS census have outsourced their DNA casework to private labs (Durose, 2008).
The value of DNA analysis is twofold: (1) Several kinds of DNA analyses rest on robust methodologies with error rates that can be measured, calculated, and interpreted to yield results that are concrete and objective. These results can be interpreted to estimate the likelihood of both a false positive (e.g., the likelihood of finding a “match” when, in fact, the samples from a crime scene and a suspect do not “match”) and a false negative (e.g., the likelihood of not finding a “match” when, in fact, the samples from a crime scene and a suspect should “match”). (2) These types of requests also have the power to provide exculpatory and inculpatory evidence with the same amount of certainty, accuracy, and reliability. Both types of evidence are equally important in criminal justice, particularly when a person’s freedom is on the line: exculpatory evidence includes any proof of an individual’s innocence, while inculpatory evidence provides proof of guilt.
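The “measurable, calculable” character of DNA statistics comes from population genetics: under the commonly used product rule, per-locus genotype frequencies (derived from Hardy–Weinberg proportions) are multiplied across independent loci to yield a random-match probability. A minimal sketch, using invented allele frequencies rather than any real database:

```python
# Sketch of the "product rule" behind DNA random-match probabilities.
# Allele frequencies below are invented; real casework uses population
# databases and statistical corrections this toy example omits.
def genotype_frequency(p, q=None):
    """Hardy-Weinberg genotype frequency: p^2 for a homozygote, 2pq for a heterozygote."""
    return p * p if q is None else 2 * p * q

def random_match_probability(locus_genotypes):
    """Multiply per-locus genotype frequencies across independent loci."""
    rmp = 1.0
    for genotype in locus_genotypes:
        rmp *= genotype_frequency(*genotype)
    return rmp

# Three hypothetical loci: two heterozygous, one homozygous.
profile = [(0.1, 0.2), (0.05, 0.3), (0.15,)]
rmp = random_match_probability(profile)
print(f"random-match probability: about 1 in {1 / rmp:,.0f}")
```

With more loci (13 core loci were standard in CODIS casework at the time of writing), the product shrinks rapidly, which is why reported match probabilities are often one in billions or smaller.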
Even years after a crime occurs, DNA analysis has proven itself to be the chief piece of analysis in many criminal cases. The past few decades have seen wrongful convictions overturned by DNA analyses, in the process proving other forensic science evidence (or at least the interpretation of this evidence) wrong. Saks and Koehler (2005) point out that forensic science testing errors and false or misleading testimony by forensic expert witnesses are the second and fifth most common issues (respectively) in the wrongful conviction cases overturned by the Innocence Project. This organization consists of a group of attorneys and advisors working pro bono who have been highly critical of many components of the criminal justice system, including a variety of areas in forensic science. Since the late 1980s, over 225 convicted felons, many serving life sentences, have been exonerated by the efforts of the Innocence Project using DNA analysis as the cornerstone of their litigation. On its Web site and in its promotional literature, the Innocence Project echoes Saks and Koehler’s calls for reform in forensic science, particularly within areas that offer only limited probative value. These include many of the remaining facets of criminalistics not previously discussed: serology and biological screening, trace evidence analysis (e.g., hairs, fibers, glass, and paint), impressions (e.g., bite marks, shoeprints, and tire marks), fire and explosive examination, and questioned documents.
Each of these areas of analysis has its strengths and weaknesses, but all of them have been shown to assist investigators in their casework. Serology and biological screening is an example of a subset of forensic services that allows any investigator to narrow down the possibilities of suspects or helps the investigator understand the circumstances and nature of the event(s) in question, yet it has limited probative value. While a variety of these forensic services are able to produce results with reliable statistics and defined error rates, critics remain steadfast that these results can be misleading to jurors. Blood grouping methods are a good example: These methods allow analysts to examine a sample of blood and produce a report that identifies the blood type of the “donor.” In stark contrast to the cost and effort of DNA analysis, these reports can be produced rapidly and at a low price. The issue, however, becomes the lack of power these analyses have in narrowing suspects with a good degree of certainty, as many people share the same blood type. “Presumptive tests” for suspected semen and saliva samples are examples of less powerful biological analyses that can yield useful results, giving investigators reasonable evidence that these samples do, in fact, consist of seminal fluid or saliva. If there is sufficient biological material and these samples are viable enough to run DNA analysis (e.g., the material has not been contaminated or degraded below qualifying levels), further analysis can be run to refine these preliminary results. Forensic analysts may also choose to use other methods, such as microscopy and species typing, to refine these results if DNA analysis is not an option.
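A back-of-the-envelope calculation shows why a blood-group match narrows the field so little. The ABO frequencies used here are rough round-figure approximations for the U.S. population, included purely for illustration:

```python
# Approximate U.S. ABO blood-type frequencies (illustrative round figures).
abo_frequency = {"O": 0.45, "A": 0.40, "B": 0.11, "AB": 0.04}

def expected_sharers(blood_type, population):
    """Expected number of people in a population who share the given type."""
    return int(abo_frequency[blood_type] * population)

city = 1_000_000
for bt in abo_frequency:
    print(f"Type {bt:>2}: ~{expected_sharers(bt, city):,} possible 'donors' in a city of {city:,}")
```

Even the rarest ABO type still leaves tens of thousands of candidates in a mid-sized city, which is why blood grouping excludes suspects far more persuasively than it implicates them.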
Other kinds of forensic tools, such as particular types of trace analysis and questioned document analysis, do not have as good a track record of producing reliable, accurate, and powerful results. Observers, however, should not cast them off as not being useful. For example, if an analyst were to find a hair in the trunk of a car stuck to a piece of duct tape that was consistent with a victim’s head hair, the car owner would have a lot of explaining to do. This is not to say that this hair couldn’t have come from another source—in fact, the analyst would be hard-pressed to come up with a statistic of the likelihood that the hair came from the victim’s head. If, in fact, the analyst offered this statistic, it would be a disservice to the jury, the defendant, and even the victim, since this information is uncertain and not based on sound statistical principles. If DNA material—whether nuclear DNA material or a kind called mitochondrial DNA material—were available for examination, then analysts would gain the power to include specific statistics in the present case to aid in the interpretation of the findings. Otherwise, the results should be interpreted with a degree of caution and weighed accordingly when making a decision based on the information found in a final report.
In particular, questioned document analysis has received a significant amount of criticism, particularly in its ability to determine “matched” writing samples (or more accurately stated, consistent writing samples). Proficiency testing has proven to yield weak results in this area (see Peterson & Markham, 1995b). Yet, handwriting comparisons are the most commonly requested service in the area of questioned documents. Based on the reasonable assumption that people’s handwriting evolves over time, and that writing habits contain idiosyncrasies, both conscious and subconscious, analysts look for consistencies in writing samples for particular classes and characteristics of writing behavior. This holds true even when a person tries to disguise his or her writing to conceal authorship. For the most part, these services are more critical in civil trials where the burden of proof does not have to meet the “beyond a reasonable doubt” standard. Other types of questioned document analysis can fortify these results to offer more resolute findings. These include the analyses and comparisons of paper, inks, and printer and typewriter output. It must be stated, however, that few of these analyses come with the ability to include standard statistics and error rates, leaving them open to the aforementioned criticism.
While the above is not an exhaustive list of forensic services performed by many crime labs, it should offer a sampling of analyses that make up a spectrum from objective to subjective. While those from the subjective end of the spectrum may not be able to conclusively point the proverbial finger at a wrongdoer or give black-and-white answers, they can further clarify what occurred or did not occur in a series of events under investigation. Obviously, very few pieces of evidence can offer a smoking gun, so to speak, on their own. It is not the sole responsibility of the forensic analyst to make this clear; it is the responsibility of all of the key players in the courtroom work group—judges, prosecutors, attorneys, the jury foreman, and the jury—to use their roles to get the most out of each analysis, report, and expert testimony in order to reach a just verdict. While many of the critiques of the more subjective aspects of forensic science merit close attention, stress should be placed on the proper weighing of this evidence when it is offered at trial. As mentioned above, these analyses do hold scientific value, but only to a limited extent. The results must be weighed carefully with all of the other evidence, testimony, and circumstances surrounding the particular trial in question.
While the forensic services at a crime lab play an important role in contemporary criminal justice and civil courts, other key services offered outside of the crime lab are important to mention. Two areas in particular stand out—forensic pathology, since these services are utilized so regularly, and forensic anthropology, for its topical importance in solving identification mysteries worldwide. The following two sections describe these aspects of forensic science, often considered separate realms because of where they are organizationally located: in government (pathology, and a minor part of anthropology) and in academia (anthropology).
In the case of a sudden and unexpected death, an autopsy has become a mandatory public health and legal investigation to ensure that any disease threat—or more typically, wrongful death—does not go uninvestigated. A variety of organizational schemas exist to accomplish this in the United States. At the heart of these schemas are two systems, the medical examiner system and the coroner system (Hanzlick & Combs, 1998). While it was previously important to speak of the differences between these two systems, these differences are narrowing as medically trained forensic pathologists are becoming the core of both. In earlier coroner systems, individuals of various backgrounds—undertakers, sheriffs, and farmers—served as the lead investigators in forensic death investigations. In the present day, this elected position still exists in rural areas; however, if there is a questioned death, most coroners have easy access to a district medical examiner or forensic pathologist with specialized training to thoroughly investigate a death. Famed pathologists DiMaio and DiMaio (2001) describe the duties of the death investigation system in their comprehensive overview of forensic pathology:
- To determine the cause and manner of death
- To identify the deceased, if unknown
- To determine the time of death and injury
- To collect evidence from the body that can be used to prove or disprove an individual’s guilt or innocence and to confirm or deny the account of how the death occurred
- To document injuries or lack of them
- To deduce how the injuries occurred
- To document any natural disease present
- To determine or exclude other contributory or causative factors to the death
- To provide expert testimony if the case goes to trial (p. 1)
While this list is comprehensive, it ignores the most fundamental roles both medical examiners and coroners play in public health and epidemiology (Hanzlick & Parrish, 1996). For example, medicolegal investigation may uncover environmental hazards, poisons, or communicable diseases that have the potential to harm others. In this case, medical examiners or coroners can warn the appropriate authorities to take proper action to prevent harm. They also monitor trends in disease and drug overdose over time to fuel public health and drug abuse research.
Forensic pathology centers on the autopsy process. This process serves to answer two questions: What was the cause of death, and what was the manner of death? The cause of death is the injury/condition or set of primary and secondary injuries/conditions that result in and contribute to the death in question. For example, myocardial infarction (heart attack), liver failure, asphyxia, alcohol poisoning/overdose, gunshot wound, blunt force trauma, and emphysema can be causes of death. The manner of death consists of only a few categories: natural, homicide, suicide, accident, and undetermined/unclassified. This determination takes the circumstances surrounding the death, including the activity of the decedent just before death, and blends them with the findings at autopsy, toxicology reports, medical history, and police narratives, among other sources, to categorize the death into one of these pathways. This is the most subjective part of the autopsy process, and it is finalized only at the end of the forensic death investigation—typically a few days before the certificate of death is printed.
The controversies in this discipline are by and large localized to disagreements over the cause and manner of death in particular investigations—and since the determination of the cause of death can be documented and preserved for years after the autopsy, these disagreements are quite limited. It is the manner of death that can be the most controversial, second only to outright malfeasance and malpractice. As this determination has bearing on life insurance policies, criminal and civil trials, and individuals’ reputations, challenges are relatively frequent in today’s society.
Sometimes death investigation, particularly human identification, requires the expertise of professionals who can interpret clues derived from the skeleton. Forensic anthropology, a specialization within physical anthropology, has particular import when the typical means of identification are destroyed, decomposed, or otherwise damaged. The determination of age, race/ancestry, sex, and living height/stature can be assessed by the advanced anthropometric methods available in the discipline to aid investigators by providing an antemortem (before death) profile of the unknown individual. These methods are based on the forensic skeletal collections of leading anthropologists around the world, particularly in the United States (Ousley & Jantz, 1998). The skeletons in these collections have been meticulously measured and documented, and have been programmed into specialized computer statistical packages that give forensic anthropologists the ability to estimate most individuals’ living profile with reasonable statistical confidence. As more contemporary skeletons are contributed to this data bank, and particularly as these collections become more diverse in their sampling, the statistical confidence of these practitioners will be enhanced.
Beyond this profile, the physical examination of the skeleton can reveal injuries, damage or wear by occupational stress, unique genetic variations, surgical modifications, and an estimate of time since those events that all can assist in identification. For example, someone who broke his or her forearm 2 months before death will show evidence of trauma and healing in the ulna or radius in that arm. The healing process comes to a stop once a person dies, so this evidence gets frozen in time, so to speak. Evidence of perimortem (around the moment of death) trauma to the skeleton may also be helpful to investigators in determining the circumstances of death. In fact, the timing of injuries can be imperative in determining wrongdoing in homicide cases (Sauer, 1998).
In contemporary times, forensic anthropologists have been key players in the investigation of mass disasters and mass graves. These individuals are highly trained in the gentle excavation and analysis of skeletal remains in many different environments. In fact, one of the leading research programs in the world—the Forensic Anthropology Center at the University of Tennessee—has made many contributions to scholarly literature on the impact of environmental and circumstantial factors on the human skeleton. This literature continues to aid forensic anthropologists in the field as they travel to distant corners of the globe in which different climates, soils, environmental factors such as acid rain and salt water, and so much more have differential impacts on skeletal remains over time.
Handling of Scientific Testimony by the Courts
With the increased use of forensic science testimony in the courts, there also must be safeguards against so-called junk science being admitted into trials. The manner in which the courts perform this task is debated among the many experts who offer their services to the court, litigants, plaintiffs, and defendants. In the days before any guidance was issued by the courts, judges relied on the “marketplace test” for expert witnesses. Basically, if the expert witness could sell his or her craft and survive in the marketplace, and if he or she offered testimony that was not common knowledge or within the grasp of the average juror, more than likely that testimony was admitted. Note that this did not screen out those who practice mumbo-jumbo science, and it could not distinguish between astrology (a very old tradition that still can make money today) and astronomy. Today, this strategy would not work. Psychic detectives would not be allowed to testify to their experiences in speaking with the dead—something that cannot be verified by any sort of empirical test, which leaves the court and other experts skeptical. However, a homeopathic doctor who has credentials from a nonaccredited institution may be able to give testimony on the effects of the sage plant on insanity from his self-documented case studies. Thus, the courts must have some method to distinguish between what can be considered science and what can be considered bogus.
The first black-and-white method of screening expert testimony was offered in Frye v. United States (1923). In this case, the defendant was accused of murder, and he offered an expert to testify to his innocence based on the results of a very primitive lie detection exam (the systolic blood pressure deception test). This witness and the subsequent testimony were rejected, since they had not yet received general acceptance in the field from which they came. This type of lie detection device was new on the scene, and the court took the position that testimony given and evidence offered should have a real-world basis and be generally accepted among the experts in the field. This prevents evidence “in the twilight zone” from prematurely influencing court decisions before it can be perfected within the expert’s field.
Frye’s general acceptance test survived until contemporary times, and was hardly mentioned until talk about updating the Federal Rules of Evidence began to stir up controversy. In 1993, this controversy came to a head in Daubert v. Merrell Dow Pharmaceuticals, when the Court revised the judge’s role in admitting expert testimony. The decision was to make the judge act as the gatekeeper to screen out junk science and allow expert testimony that is reliable, valid, testable (falsifiable), and generally accepted. The Court did not mandate that these criteria be limiting or inflexible, but it did stress that judges should utilize, to the best of their ability, their analytical skills in making a judgment call on the methodology behind the evidence or testimony and its standing in the field from which it came. Two more key decisions were made by the Court to enhance the role of trial judges as gatekeepers of expert testimony. In General Electric Co. v. Joiner (1997), the standard for appealing lower courts’ decisions on allowing or disallowing expert testimony was set to abuse of discretion instead of a de novo review of the proffered expert testimony. This means that trial judges should be challenged on their decisions to accept or disallow expert testimony only if a plaintiff or defendant can prove that the judge broke a procedural rule in the process of coming to that decision. Complete de novo reviews of such judicial decisions were deemed inappropriate. The second was Kumho Tire Co. v. Carmichael (1999), which expanded the Daubert decision to include all expert witnesses, not just those with a scientific background (auto mechanics, accountants who have worked closely with the FBI on fraud cases, and many others without advanced degrees but with specialized knowledge).
So, the courts are set with the precedent to keep out junk science, but can they actually perform the task well? The Court has spoken about the ability to utilize “special masters” to aid the court in coming to a conclusion on the veracity of offered expert testimony. Some scholars suggest that a research foundation be created, at least for the Supreme Court, similar to the research support Congress currently has at its disposal. This way, the parties can offer expert testimony, but the court can counter with nonbiased (as much as this is possible) research that can guide the trial in the right direction. Whether these safeguards, if instituted regularly, assist in keeping out junk science is an empirical question that desperately needs answering.
Today, decisions have been made at the state level to continue to follow a Frye-based system, a Daubert-based system, or a third system that is a hybrid of the two. The federal system works solely on Daubert principles. As can be anticipated, there are advocates of both Frye- and Daubert-based systems—the differences between them are outside the scope of this chapter. However, readers should take note that challenges to expert testimony are constantly being litigated. The decisions in these cases will be among the most important factors shaping what will be deemed appropriate in U.S. courts.
Given the breadth of the forensic sciences, sifting through every method and technique would fill a complete book, if not a series of books. As outlined in this chapter, many of the examinations performed in forensic laboratories cannot even be assessed with conventional statistics, the exceptions being DNA analysis, blood group typing (which has lost prevalence as DNA analysis gained popularity), and certain analyses of gunshot residue (Faigman, Kaye, Saks, & Sanders, 2002). It is therefore important to note the varying degrees of subjectivity and objectivity within these methods and techniques in order to gauge the overall utility of these analyses as stand-alone pieces of information. David Faigman and colleagues have begun the task of classifying many techniques used across the forensic sciences in terms of their degree of subjectivity, their reliability in the minds of forensic scientists, and their susceptibility to attack under the criteria posited by Daubert. Because no aggregate data are available from which to derive relative frequency probabilities, it falls to the experience of the individual examiner to establish levels of confidence around his or her determinations. The difficulty of placing confidence intervals around such testing is apparent to anyone with even an elementary knowledge of statistics.
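To make the statistical point concrete: where aggregate data do exist (say, blind proficiency-test results for a laboratory technique), a relative frequency error rate and a confidence interval around it can be computed directly. The sketch below uses the standard Wilson score interval on entirely hypothetical figures, purely for illustration; it is not drawn from any real proficiency study.

```python
import math

def wilson_interval(errors: int, trials: int, z: float = 1.96):
    """95% Wilson score interval for an error rate estimated
    as a relative frequency from aggregate test data."""
    p = errors / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return center - half, center + half

# Hypothetical figures: 12 erroneous conclusions in 400 blind tests.
low, high = wilson_interval(12, 400)
print(f"observed error rate {12/400:.1%}, 95% CI [{low:.1%}, {high:.1%}]")
```

The text’s point stands: for most pattern-matching disciplines no such aggregate trial data exist, so no interval of this kind can be computed, and confidence rests instead on the individual examiner’s experience.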
The future of forensic science depends greatly on the standards the courts set over the coming years. If more states move to the Daubert criteria for evaluating expert testimony, a portion of the more subjective forensic analyses will likely be decommissioned. While many would argue that this is a necessary and overdue development, a good portion of these forensic services do offer investigators value that could be lost. There is no reason, however, that investigators or litigants should not continue to use such services for that value; the information in the final reports would simply be used to help build a case rather than be admitted at trial. As previously suggested, these analyses can lead to further inquiries that may break cases wide open, whether civil or criminal.
As a conglomerate of professions, the forensic sciences are actively overhauling their professional codes of ethics to address a rash of cases in which rogue forensic scientists falsified reports, performed bad science, or egregiously overstepped their bounds as expert witnesses. While frauds exist in every walk of life, a person who harms the liberty of another out of personal gain or lack of professionalism is surely among the most despised, both within and outside the professions. Even entire crime labs have been identified as corrupt. Accreditation monitored by professional organizations, along with more robust accreditation processes, has been seen as a way to root out such problems before they begin. This accreditation must be maintained throughout a practitioner’s career and for the duration of a lab’s existence.
On a final note, much has been invested in professionalization and in encouraging continuing education and training to help forensic practitioners expand their knowledge base and keep pace with the state of the art in the fast-moving worlds of science and technology. The most recent U.S. presidents have also committed to expanding forensic science research and development, particularly in DNA analysis and human identification. Such advances in technology will be key for many years to come to the U.S. criminal justice system’s capacity to solve crimes, seek justice, and learn the truth about the many mysteries that confront it.