J Worth Estes. Cambridge World History of Food. Editor: Kenneth F Kiple & Kriemhild Conee Ornelas. Volume 2. Cambridge, United Kingdom: Cambridge University Press, 2000.
The eminent medical historian Henry E. Sigerist once noted that “[t]here is no sharp borderline between food and drug,” and that both dietetic and pharmacological therapies were “born of instinct” (Sigerist 1951: 114-15). Today we tend to focus our studies of food on its nutritive values in promoting growth and health and in preventing disease, but for many centuries past, food had an additional, specifically medical role—as a remedy for illness.
The United States Food, Drug, and Cosmetic Act, signed into law June 27, 1938, provides no clearer differentiation between “food” and “drug” than Sigerist could. According to the current wording of that legislation, which updated the Pure Food and Drug Act of 1906, “the term ‘food’ means (1) articles used for food or drink for man or other animals, (2) chewing gum, and (3) articles used for components of any such article,” whereas “the term ‘drug’ means (A) articles recognized in the official United States Pharmacopoeias [and several other compendia]; and (B) articles intended for use in the diagnosis, cure, mitigation, treatment, or prevention of disease in man or other animals; and (C) articles (other than food) intended to affect the structure or any function of the body of man or other animals; and (D) articles intended for use as a component of any articles specified in clause (A), (B), or (C).” Under clause (B) above, many items that have traditionally been considered foods might also be regarded as drugs under federal law, although they seldom are. The Food and Drug Administration (FDA) can intervene in cases involving food only when it judges an item to be misleadingly labeled as a “food”; it specifically excludes vitamins from the category “drugs.”
The Foundation for Innovation in Medicine recently coined the word “nutraceutical” to signify “any substance that can be considered a food and provides medical and health benefits, including the prevention and treatment of disease. Such products include traditional foods, isolated nutrients, dietary supplements, genetically engineered ‘designer’ foods, herbal products and processed foods” (Tufts Journal 1992: 10). However, the very concept of nutraceuticals seems to sidestep even the vague definitions to which the FDA is tethered. Indeed, it is probably not possible to differentiate foods from drugs with mutually exclusive definitions. The ambiguity, troublesome as it may be in some contexts, has deep historical roots. Until the early twentieth century, physicians routinely prescribed specific foods and diets for their medical—that is, curative or preventive—value. The reader should also keep in mind that nonprofessional healers, such as wives and mothers, have employed the same food remedies for similar purposes.
Rarely is it possible to pinpoint the putative healing roles of specific components of traditional foods, such as the oils that confer distinctive tastes on several botanical flavorings and spices, or extracts with known pharmacological properties, such as the active principles of coffee, tea, and cocoa. Similarly, we are not concerned with toxins or biological contaminants of foods, such as the ergot alkaloids produced by a fungus that sometimes infects bread made with rye (Hofmann 1972) or the anticoagulant found in spoiled sweet-clover fodder for domestic cattle. Nor does this chapter deal with foodstuffs that only incidentally provide raw starting materials for synthesizing what are unmistakably drugs—for example, the yams from which are extracted the diosgenins that are used as starting material in the manufacture of steroids.
Its Roots in the Ancient World
Beginning with the premise that “[m]an knew only too well that he could not live without food, that food sustained life,” Sigerist postulated: “Physiology began when man … tried to … correlate the actions of food, air, and blood” (Sigerist 1951: 348-9). Ancient Egyptians, for instance, developed an ingenious theory to account for the transformation of food and air into the substance of the human body and to explain disease. Not only did they recognize the hazards of an inadequate food supply but they also knew when to prescribe a normal diet for the sick or injured, and their physicians used many items in the ordinary diet as remedies that would become common in later European cultures (Estes 1989; Manniche 1989).
The Egyptians’ major sweetening agent, honey, was perhaps the most effective of all their medicines. Its efficacy as a wound dressing, attributable chiefly to the desiccating effect on bacteria of its 80 percent sugar content, probably led to its further use in many oral remedies because of its ability to prevent infection; Egyptian medical theory associated even superficial infections with internal disturbances. Modern laboratory studies have shown that honey could, indeed, have inhibited the growth of the bacteria that often contaminate wounds until normal immunological and tissue repair processes had been completed (Estes 1989: 68-71). Because it has the same effect on bacteria as honey, a thick paste of ordinary granulated sugar is sometimes applied to infected wounds today (Seal and Middleton 1991), but honey is unlikely to have been selectively effective as an antimicrobial agent when taken internally.
Although some of the Egyptians’ speculative patho-physiological concepts can be recognized among the Greeks’ explanations of health and disease, it was the latter that directly shaped many aspects of the relationship between food and health that survived for the next 2,500 years. As early as the sixth century B.C., the philosopher-physician Alcmaeon of Croton recognized that the body’s growth depends on its food intake. A century later, the Hippocratic school of physicians described food as one source of the body’s energy and its heat (Winslow and Bellinger 1945). Other sixth-century Greek thinkers tried to explain the conversion of food to parts of the body. Thales of Miletus, in Asia Minor, for example, thought that its primary substance was water, whereas Anaximenes, also of Miletus, argued that it was air. In the fifth century, the Sicilian philosopher Empedocles said that the four irreducible elements—fire, water, earth, and air—were the basic components of both food and body. He and his contemporary Anaxagoras, the first major philosopher to live in Athens, agreed with Alcmaeon that each foodstuff contains particles that are assimilated to specific parts of the body; Empedocles said that each assimilated particle fits exactly into the body part that takes it up (King 1963: 56-68; Leicester 1974: 9).
In the fourth century B.C., Aristotle followed Empedocles and Anaxagoras when he postulated that the four elements are blended from four “qualities”—hot, cold, moist, and dry—that are reciprocally paired in each element. He also held that the primary “chemical” reaction of the body is pepsis, a word that has historically been translated into English as “coction,” from the Latin coquere, to cook. In both Latin and English forms, coction implies heating and ripening (of both fruit and morbific matter), and it was associated in English medical texts with perfecting something via natural processes. Aristotle used pepsis in the same way, describing it as the changes a foodstuff undergoes as it is prepared in the gastrointestinal tract for assimilation into the body. In short, both the Greek “pepsis” and the English “coction” were taken to mean “digestion.” Hence, the diagnosis of dyspepsia was a synonym for indigestion. According to Aristotle, normal coction of foods, fueled by the body’s innate heat, thickens the body’s fluids. By contrast, coction is incomplete in the sick, resulting in abnormally thin, or watery, fluids (Leicester 1974: 15-18, 33-5).
In the second century A.D., the physician whose teachings came to dominate Western medical thought for seventeen centuries, Galen of Pergamum, in Asia Minor, reemphasized the teachings of Hippocrates, and postulated that all physiological activity depends on balances described by Aristotle (Winslow and Bellinger 1945; King 1963: 56-68; Leicester 1974: 9). According to Galen’s medical configuration of Aristotle’s physiological model, blood is associated with heat and moisture, phlegm is associated with moisture and cold, black bile is associated with cold and dryness, and yellow bile with dryness and heat.
Moreover, each humor was associated with one of the seasons, just as Posidonius of Rhodes, in about 100 B.C., had associated each with a corresponding temperament (Leicester 1974: 30-2). Blood was now associated with spring and the sanguine temperament, phlegm with winter and the phlegmatic temperament, black bile with autumn and melancholy, and yellow bile with summer and the choleric or bilious temperament. (These terms for the temperaments, however, were not introduced until the twelfth century A.D., by Honorius of Autun.) The humoral theory satisfactorily explained the physiological clues to the balances that had to be rectified in order to restore health and stability to the sick body. At the same time, it permitted the construction of a pragmatic approach to the principles of medical therapy, including dietetics.
These principles were based on the premise that imbalances among the humors could be corrected by administering drugs—or foods—with appropriately opposite properties. Thus, because bilious fevers were associated with dryness and increased body heat, they should be treated with moist, cool remedies in order to rebalance the blood and yellow bile—the humors most disturbed in such patients. Similarly, dropsy, the accumulation of water in the tissues, should be treated with remedies that dried them, such as diuretics. Obviously, many foods could correct imbalances as well as doctors’ drugs could. Cool, moist vegetables, for example, like cucumbers, were well suited to the needs of bilious fever patients.
The Hippocratic texts show that physicians of the fifth century B.C. recognized the influence of nourishment on human health. This was almost certainly not a new idea even then, but these are the earliest such writings to have survived. The Regimen for Health focuses on the therapeutic value of food, whereas the first section of the Aphorisms describes diet with special reference to illness. It teaches that proper nourishment, because of its influence on digestion, is more important to the sick patient than drugs (Lloyd 1978: 206-10, 272-6). Other authorities added that a carefully chosen diet can rectify disturbances in the balances of heat, cold, dryness, and moisture caused by exhaustion of the energy normally derived from the diet, or by changes in one’s external circumstances, such as sudden wealth or poverty. Remedies were classified medically by the proportions of the same four qualities in each, and foods were further categorized as to whether they were wild or domesticated (Edelstein 1987, orig. 1931). In short, foods, like drugs, were prescribed in order to correct imbalances among the humors or to modify digestive processes that would themselves influence the humors, as far as ancient physicians could tell.
Dietetic instructions were designed to ensure optimum digestion, in order to minimize the amount of indigestible residues that could accumulate in the alimentary tract. Hence, in a physiological echo from the banks of the Nile, Diocles of Carystus, an Athenian physician of the fourth century B.C., cautioned: “The chief meal is to be taken when the body is empty and does not contain any badly digested residues of food” (Sigerist 1987: 238-40, orig. 1961), which, in the Egyptian tradition, were major causes of illness (Estes 1989: 80-91).
Hellenistic medical writers went beyond the Hippocratics when they asserted that a healthy diet is more important than post hoc healing of the sick. But emphasis on merely maintaining a healthy lifestyle receded when it became less feasible in the rapidly changing world of the Roman Empire. And by then, physicians were giving increased attention to diet as a coequal branch of their practice along with drugs and manual operations, such as bleeding—a differentiation that dominated the structure of professional medicine well past the Middle Ages (Siraisi 1981: 252; Edelstein 1987).
Early in the first century A.D., the Roman encyclopedist Celsus again echoed the Egyptian preoccupation with the intestines when he wrote that “digestion has to do with all sorts of troubles.” His dietary guide, in the time-honored Aristotelian tradition, included lists of drugs that heat or cool; those that are least and most readily digested; those that move the bowels or constrict them; those that increase urine output, induce sleep, and draw out noxious materials that cause disease; and those that suppress illness with or without simultaneously cooling the patient. Celsus, however, concluded his pages on dietetics on a note of skepticism: “But as regards all these medicaments, … it is clear that each man follows his own ideas rather than what he has found to be true by actual fact” (Celsus 1935, 1: 77, 207-15).
A century later, Galen taught that most illnesses are caused by “errors of regimen [including diet], and hence avoidable.” He went on to explain how appropriate attention to food, drink, and air can preserve and restore health (Temkin 1973: 40, 154-6). Indeed, his Therapeutic Method required that the physician understand precisely how the four basic physiological qualities of life were mixed in every foodstuff, as well as in every drug (Smith 1979: 103).
The complexities involved are evident in the Materia Medica by Celsus’s near contemporary Dioscorides, a peripatetic Greek physician from Asia Minor (Dioscorides 1934). His book remained the fountainhead of European therapeutics—and dietetics—for 15 centuries, and traces of its influence can be detected even today. Over the years, but especially in the Middle Ages, Dioscorides’s descriptions were altered and expanded by commentators who, as Celsus had feared, relied as much on what they thought was true as on what was evident to the senses. An anonymous sixth-century author cited Hippocrates and Dioscorides in his book on the nutritional medical uses of plant foods, but he, too, found it difficult to differentiate clearly between “foods” and “drugs” (Riddle 1992, orig. 1984).
Dietetics in Medieval Europe
Diet retained its position as one of the three major modes of therapy, along with medicine and surgery, throughout the Middle Ages (Siraisi 1990: 121, 137). Although emphasis on the healthful properties of diet was first systematized by Greek writers, much of it was transmitted back to Europe in Arabic texts of the ninth to eleventh centuries that preserved Hippocratic and Galenic principles (Levey 1973: 33-5).
For instance, the Canon of Medicine, written about A.D. 1000 by the Persian physician Avicenna, made its initial impact on Europe when it was translated into Latin in the twelfth century at Toledo, by Gerard of Cremona. It had become a medical text at Montpellier by at least 1340, which helped to ensure the diffusion of Avicenna’s ideas throughout Europe.
Avicenna perpetuated the Egyptian notion that excess food tends to putrefy, and its Greek corollary that it promotes indigestion and alimentary obstruction. He expanded the teachings of Greek, Roman, and Indian writers with his assertion that foods have medicinal properties that are unrelated to their hot, cold, moist, or dry qualities, while recognizing that some foods have therapeutic properties even when they have no nutritional value. His Canon classifies foods as of rich or poor quality, light or heavy, and wholesome or unwholesome, and includes dietary rules applicable to both health and disease, as well as a diet for the aged (Shah 1966: ix-x, 182-7, 309-20, 338-40, 359-61).
Avicenna simplified Galen’s description when he explained how food is subjected to its first coction in the stomach, resulting in chyle, which passes, via the portal veins, to the liver. There a second coction turns it into yellow bile, black bile, and blood, while its aqueous residue is carried to the kidneys for eventual release from the body. At the same time, the residue of the first, gastric, coction is discharged, partly as phlegm, into the lower intestinal tract. The blood alone is subjected to a third coction, when the heart converts it to what Aristotle had called the vital spirit, and a fourth, in the brain, where it is further transformed into a psychic spirit (Leicester 1974: 59). This satisfying explanation facilitated the physician’s choice of appropriately therapeutic drugs—or foods.
Constantinus Africanus, who died 50 years after Avicenna, translated many Arabic texts at Salerno (site of the first major medical school in Europe) and Monte Cassino. His Book of Degrees complicated therapeutic procedures when he described gradations in the heat, cold, dryness, and moisture of foods and medicines: A food is hot in the first degree if its heating power is less than that of the human body; it is hot in the second degree if its heating power equals that of the body; in the third degree if its heat exceeds that of the body; and in the fourth degree if it is simply unbearably hot. In his translation of a commentary on a book by Galen, Constantinus summarized current thinking about the dietary approach to therapy: “Good food is that which brings about a good humor and bad food is that which brings about an evil humor. And that which produces a good humor is that which generates good blood” (Thorndike 1923, 1: 751; Leicester 1974: 62-5).
By the thirteenth century, Latin translations of Galen’s specifically dietary works, such as De Alimentorum Facultatibus and De Subtiliante Dieta, along with translations of pseudo-Galenic works on the same subjects, such as De Nutrimento and De Virtutibus Cibariorum, had joined the works of Avicenna and other Arabic writers, and the dietary works of a ninth-century Egyptian writer, Isaac Judaeus, in many medical curricula. One of the most widely consulted late medieval medical manuals was by Peter the Spaniard, or Petrus Hispanus, of Paris (later Pope John XXI, for a year until his accidental death in 1277). His scholastic commentaries on Isaac’s Universal Diets and Particular Diets led him to pose such questions as: “Why does nature sustain a multitude of medicines, but not of foods? … Is fruit wholesome? … Are eggs or meat better for convalescents? … Should paralytics eat fried fish? Are apples good in fevers? … Why do we employ foods hot in the fourth degree and not those cold in the same degree?”
Peter answered such queries with syllogisms premised on gradations in the four basic qualities of foods. At the same time, he disputed the ancient idea that foods could be fully assimilated into the structure of the body (Thorndike 1923, 2: 488-507). However, Peter was not alone in rationalizing medical dietetics. As late as the 1570s, medical faculty and students at Montpellier debated such theses as “[w]hether barley bread should be eaten with fruit,” “whether it was safe for persons with kidney trouble or fever to eat eggs,” and “whether dinner or supper should be the more frugal meal” (Thorndike 1941, 6: 222-3).
The prominent attention given to diet in the fourteenth century is exemplified by the physician among the tale spinners who accompanied Chaucer’s pilgrimage to Canterbury:
He was well-versed in the ancient authorities;
In his own diet he observed some measure;
There were no superfluities for pleasure,
Only digestives, nutritives and such (Chaucer 1960: 31).
It was not only physicians who were concerned with the relationship of diet to health in the Middle Ages, however. Medical authorities from Hippocrates to Avicenna were cited in many manuscripts designed for lay use (Thorndike 1940; Bullough 1957), such as several late medieval Tacuina Sanitatis (Handbooks of Health) based on the works of the eleventh-century Arabic physician Ibn Botlân. These manuscripts aimed, among their medical goals, to teach “the right use of food and drink,” the “correct use of elimination and retention of humors,” and how to moderate the emotions associated with each humor (Arano 1976: 6-10).
By the late Middle Ages, the therapeutic benefits of food had entered into the everyday planning of at least the grand households, the only ones for which evidence has survived. Spices, for example, were regarded as both aids to digestion and evidence of a host’s wealth. Medieval herbals and dietaries listed the health-giving properties of foods in the classical humoral tradition. They classified foods and medicines by their degrees of heat, cold, moisture, and dryness. Thus, melons, obviously cold and moist, were suitable for treating patients with fevers. But it may be that this new order resulted from a misreading of the original texts, inasmuch as later writers did not retain it.
Largely under the influence of the Regimen of Health, which codified the prescripts for healthy living taught at Salerno in the eleventh century, physicians devised diets appropriate to both Aristotelian physiology and specific illnesses. Thus, Taddeo Alderotti, professor of medicine at Bologna in the late thirteenth century, recommended to many patients (probably chiefly those with fevers) that they avoid hot bread, cheese, fruits, beef, and pork, in favor of less-stimulating foods. However, in many instances, it is difficult to differentiate between his culinary and medical directions (Siraisi 1981: 293).
The patient’s age was another determinant of therapeutic diets. In 1539, for example, a moderately moist and warm diet was still being recommended for young children, who were viewed as phlegmatic (that is, as very moist and cold). As they grew older, they were thought to become more sanguine or choleric, which meant that they would now benefit from much moister and colder foods. And as their strength declined in old age, they needed foods that were only as moderately moist and warm as those that had benefited them in childhood. Indeed, many foods were more closely associated, in medieval thinking, with illness than with their gustatory effects, although taste was a major clue to the presumed Galenic properties of any drug or food (Drummond and Wilbraham 1958: 65-77; Teigen 1987). In short, the medical correlates of food permeated many aspects of medieval life, from the preparation of patients for surgery to the choice of menus for banquets and bathing establishments (Cosman 1978).
Dietetics during the Scientific Revolution
The rudiments of modern chemistry were beginning to influence medical ideas in the late Middle Ages through the writings of alchemical physicians. In the late fifteenth century, for example, Conrad Heingartner of Zurich cautioned his readers to chew food thoroughly, in order to maximize its digestibility, and not to eat too many courses at one meal, because some foods are more readily digested than others. He also urged that the principal meal be taken at night, because the cold night air aids digestion by forcing the body heat necessary for its completion to stay deep inside the body (Thorndike 1934, 4: 377-8).
It is clear that Heingartner’s concept of the physiology of digestion owed much to Hippocratic-Galenic medical concepts, but the latter were about to disappear in favor of more explicitly chemical explanations. The experimental science that began emerging in the late sixteenth century began, however slowly, to replace the classical Greek and Islamic traditions that had heretofore dominated European thought.
By the late seventeenth century, a major new hypothesis was beginning to hold sway over medical thinking. It postulated that illness represents imbalances not only in the classical four humors but also in the tone—the innate strength and elasticity—of the solid fibrous components of blood vessels and nerves. Both kinds of structures were considered to be hollow tubes that propelled their respective contents through the body with forces proportional to the tone of their constituent fibers. That is, the body was healthy when blood or “nerve fluids” could circulate freely, or when sweat, urine, and feces could be expelled freely, and so forth.
Thus, a fast pulse was the distinguishing hallmark of fever, which was interpreted as the result of excessive arterial tone, requiring depletive therapy to bring it under control. Conversely, a slow pulse was interpreted as evidence of a weakness that required stimulant therapy (Estes 1991a). Historians have labeled the new hypothesis as the “solidist theory,” to distinguish it from the older humoral theory. The two concepts were by no means mutually exclusive, however, and most therapeutic effects were interpreted within the frameworks of both. Nevertheless, the older Hippocratic-Galenic focus became progressively less prominent in medical texts.
Dietetics fitted into the new theory as well as into humoralism. By the time solidism had taken hold in medical thinking, the specifically therapeutic role of diet had been clarified by categorizing daily menus as full, moderate (or middle), or low (or thin). As Thomas Muffett wrote in 1655: “The first increaseth flesh, spirits, and humors, the second repaireth onely them that were lost, and the third lessenth them all for a time to preserve life” (Drummond and Wilbraham 1958: 121-2). That is, full diets are appropriate, if not necessary, for maintaining growth and strength in vigorous younger people, whereas moderate diets are more suitable for middle age, and low diets for old age or during illness at any age. Muffett’s description would surely have been recognizable to Aristotle, but within the solidist framework, the low diet was said to be depletive, “sedative,” or antiphlogistic (that is, antifebrile), suitable for reducing excessive tone of the arteries and nerves, whereas the stronger diets were said to be stimulating.
Although the discovery of the Americas put new foods on European tables, medical properties were initially ascribed only to sassafras, sarsaparilla, and the pleasurable stimulating beverages—coffee and chocolate. When John Josselyn returned to London from New England, he reported that American watermelons were suitable for people with fevers, and cranberries for those with scurvy and “to allay the fervour of hot Diseases” (Josselyn 1672: 57, 65-6). Yet such foods could not survive the transatlantic crossing to European markets. Immigrants to colonial New England harvested both “meate and medicine” in their gardens, sometimes from the same plant, in the tradition they had learned before leaving home. Many relied on herbals for descriptions of each plant’s medical properties, which closely followed those of Dioscorides (Leighton 1970: 134-7).
Early in the eighteenth century, Ippolito Francesco Albertini, professor of medicine at Bologna, echoed Avicenna when he wrote that his mother should not eat a food, such as meat, “that is easily converted into blood.” At about the same time, Albertini’s brother-in-law, Vincenzo Antonio Pigozzi, also a physician, directed a correspondent to strengthen his patient’s stomach with food and medicine because, as Celsus had said, it was “apt to be weak in students and other diligent and well-behaved persons,” so that “their daily food, on account of its coldness, is poorly purified and digested” (Jarcho 1989: 99, 177).
Eighteenth-century British hospitals developed standard diets based on humoral and solidist precepts. A 1782 London hospital dietary mandated the same breakfast for patients on both low and full diets: water-gruel or milk porridge (made of 1.25 ounces of oatmeal, and raisins, in a pint of water or milk). The supper menu was the same, supplemented by the addition of a pint of broth (made by boiling a leg of lamb or other meat for 1.5 hours) four times a week, or a quarter pound of cheese or butter during the week. The full and low diets differed chiefly in their midday dinner menus.
Febrile patients on the low diet received rice milk twice a week. Also twice a week they were served bread pudding (made by soaking a half pound of bread crumbs in one pint of milk overnight and then adding two or three eggs before the mixture was boiled in a bag for an hour or so). Lamb or other meat broth was also on the menu twice a week, or plumb broth (made by boiling six ounces of meat or bone with a half pint of peas and a half ounce of oats in water). A quarter pound of boiled beef or mutton, or roast veal, was added to the menu twice a week, and a 14-ounce loaf of bread was served every day.
By contrast, patients on a full diet received rice milk once a week. A half pound of boiled pudding appeared weekly (made by boiling a mixture of one pound of flour, a quarter pound of suet or meat or eels, and fruits, in 13 ounces of water, for one to two hours). A half pound of beef or mutton was served with greens four times weekly, and a loaf of bread daily. In addition, patients on the low diet received one pint of small beer daily, while those on the full diet were allowed four times as much. The presumed medical qualities of the two diets differed chiefly with respect to the stimulating property of red meat; their total mass was nearly the same (Estes 1990: 66-7). Such diets perpetuated the ancient admonition to “feed a cold and starve a fever,” whose origins are in the Hippocratic texts, and it has remained a guiding therapeutic principle down to our own time.
In 1772, Dr. William Cullen of Edinburgh lamented that although dietetic prescriptions were among the most valuable of all therapies, they had fallen out of regular medical use in recent years (Risse 1986: 220). He might have been surprised when, a few years afterward, his colleague, Dr. Andrew Duncan, Jr., increased the proportions of stimulating foods served to his febrile or otherwise debilitated patients, chiefly by increasing their meat allowances (Estes 1990: 65-8). In fact, by the end of the century, Dr. William Heberden of London was complaining that “[m]any physicians appear to be too strict and particular in the rules of diet and regimen,” and he urged that the sick be allowed to choose their own diets, according to their own tastes (Heberden 1802: 1-5).
In the meantime, experimental chemistry had begun to shed new light on the physiology of digestion and on the respiratory processes involved in the conversion of food into energy, carbon dioxide, and the tissues of the body. Early in the seventeenth century, Jan Baptista Van Helmont, who lived near Brussels, showed that gastric juice is acid, and that it is necessary for the digestion of food. He may even have identified hydrochloric acid as its chief component, but it was not until 1752 that this was proved by René Antoine Ferchault, Sieur de Réaumur, in France. Although Van Helmont’s concept of digestion contained some Aristotelian elements, he did show that the acid in gastric juice is neutralized in the duodenum by bile from the liver. However, it was in 1736 that Albrecht von Haller, at Göttingen, established that bile emulsifies fats. Haller’s teacher, Hermann Boerhaave of Leiden, had already laid the groundwork for differentiating what were later named proteins (in 1838) and carbohydrates (in 1844). Haller added the last major food class, fats. But Boerhaave and Georg Ernst Stahl, professor of medicine at Halle, both leading proponents of solidism, denied that gastric juice was acid. Instead, they favored the ancient concept that digestion was a putrefactive or fermentative process, which eventually was abandoned in the face of Réaumur’s proof that digestion represents the dissolution of foodstuffs in gastric acid (Drummond and Wilbraham 1958: 232-55; Leicester 1974: 96-7, 118-27; Estes 1991a).
Between 1700 and 1850, when modern experimental pharmacology began to be applied to the study of drug effects on living organisms, physicians trained in the European medical tradition used as drugs about 500 botanical remedies, and 170 chemical compounds and other materials. Any given eighteenth-century physician employed some 125 different botanical remedies in his standard repertoire, depending on his training and experience. Save for the few botanical remedies that had been introduced from the New World by then, the majority of plants prescribed in the eighteenth century had also been used by healers in the ancient Mediterranean world, including most of the medically important foodstuffs shown in the seven tables that follow (Estes 1990). They illustrate the specifically therapeutic roles of familiar culinary ingredients, while reemphasizing the difficulty of discriminating between foods and drugs. These tabulations may not be all-inclusive, and several items could have been included in more than one list. Most of these foods are more fully described elsewhere in this work.
Dietetics and Modern Food Sciences
By about 1900, medical dietetics assumed new forms and goals in the wake of experimental laboratory investigations that permitted physicians to incorporate the concept of metabolic needs into their professional thinking. Although the identification and isolation of vitamins over the following three decades nearly completed that story, daily requirements for carbohydrates, fats, proteins, minerals, and vitamins are still being debated and modified. Many of today’s medical concerns about foods are related to their statistical associations with specific illnesses. While such topics are explored elsewhere in this work, consideration of the transition from old to new concepts of medical dietetics is pertinent here.
The first foods to provide a true cure of any disease were citrus fruits, but the ready response they elicited in scurvy patients was not, at first, recognized as the result of replacing a substance that was missing from their diet. Indeed, James Lind’s celebrated experiment with oranges and lemons on HMS Salisbury in 1747 was not the negatively controlled clinical trial it is often said to have been: He was only comparing six acids, including the fruits, because scurvy was thought to be a kind of fever and, therefore, treatable with almost any cooling acid. Some thought that excess alkalinity caused scorbutic fever, which led to the same therapeutic conclusion. Moreover, Lind did not argue that citrus fruits were the best protection against scurvy; indeed, he continued to recommend several other acids. Marine surgeons had recognized the antiscorbutic value of lemons by at least 1617, but not until 1795 did the British navy adopt limes as standard protection against scurvy (the U.S. Navy followed suit in 1812), as evidence of their efficacy continued to accumulate. Even so, navy surgeons knew only that lemons, limes, and oranges could prevent or cure the disease, not that they replenished the body’s stores of a vital principle. When, in 1784, the Swedish chemist Karl Wilhelm Scheele discovered citric acid in lemons, he erroneously assumed that it was the active therapeutic principle, and it was not until the 1920s that the true antiscorbutic principle, ascorbic acid, was isolated and identified as essential to life (Estes 1979, 1985, 1990: 49, 116).
Studies of the physiology of digestion provided essential stepping-stones to the development of modern dietetic therapeutics early in the nineteenth century. In England, William Prout classified foods into the three major groups that are, in retrospect, recognizable as carbohydrate, protein, and fat. Friedrich Tiedemann and Leopold Gmelin reported from Heidelberg, in 1826, that the ingestion of any kind of food increased the amount of acid in the stomach, and that the acid could dissolve all foods. Some investigators continued to argue that lactic acid was the active principle of digestion, but in 1852 Friedrich Bidder and Carl Schmidt of the German university at Dorpat, Estonia—then emerging as the first academic center for the study of pharmacology—showed that the free hydrochloric acid discovered by Réaumur was the only acid in gastric juice (Leicester 1974: 147, 161-3).
The experiments on gastric function performed by U.S. Army surgeon William Beaumont led him to conclude, in 1833, that although animal and grain products are easier to digest than most vegetable foods, the differences are attributable only to the relative sizes of their constituent fibers, not to their other properties. He emphasized this by pointing out that the action of the hydrochloric acid in gastric juice is the same on all foodstuffs (Beaumont 1833: 275-8), as Tiedemann and Gmelin had shown.
By the 1840s, digestive enzymes had been found in saliva, gastric juice, and pancreatic juice, and by 1857 Claude Bernard had demonstrated that the glucose that provides the body’s energy is released from glycogen that has been synthesized and stored in the liver (Leicester 1974: 165-9). It was probably these discoveries that permitted the eventual abandonment of dietetic therapies that had originated in the ancient world, especially when it became apparent that food itself does not alter normal gastric acid or digestive enzyme secretion, although both were eventually shown to be altered in specific diseases.
In 1816, François Magendie of Bordeaux found that dietary sources of nitrogen, especially meat, are essential for health in dogs, and that all the nitrogen necessary to sustain life comes from food, not air (Leicester 1974: 146). His work prompted the studies of Justus von Liebig, of Giessen, in Germany, which by 1840 had laid the foundations for the study of metabolism and other aspects of human nutrition. He showed, for instance, that the nitrogenous substances present in meat and some vegetable foods are assimilated into animal tissues, while carbohydrates like starch and sugar are consumed during oxidative respiration. Liebig’s work began to elucidate the complex chemical interactions of foodstuffs with the fabric of the body (Drummond and Wilbraham 1958: 285-6, 345; Holmes 1973). In 1866, his pupil Carl von Voit showed that carbohydrates and fats supply all the energy used by the body, while its chief nitrogenous components, the proteins, are themselves derived from dietary protein of both plant and animal origin (Leicester 1974: 192).
Perhaps with these discoveries in mind, in 1865 Liebig began to promote an “Extract of Meat” he had devised as a medicine for specific illnesses, such as typhoid fever and inflammation of the ovaries. He marketed it both as a proprietary remedy and, in Germany (but not Britain or the United States), as a prescription drug. Although it was shown almost immediately to lack meat’s nutritive elements, the extract was a success in European, British, and American kitchens for many years afterward, especially when Liebig redirected its advertising toward over-the-counter consumers instead of physicians and pharmacists, who were unconvinced of its therapeutic value. The extract also prompted development of the first commercial formulations of infant foods, which were advertised as promoting growth and preventing disease (Apple 1986; Finlay 1992).
Like Liebig’s Meat Extract and many botanical drugs, several foods were medically important in the eyes of nineteenth-century laymen, even if not in those of contemporary physicians. Some dietary lore was pure mythology, such as the aphrodisiacal properties imputed to nutmegs, tomatoes, quinces, and artichokes (Taberner 1985: 204-6, 60-3). However, many over-the-counter remedies or their ingredients had the imprimatur of regular medicine, as is evident in British and American home-medicine texts.
Such books evaluated not only the nutritive value of foods but also their specific physiological effects in health and disease. By way of example, when J. S. Forsyth prepared the twenty-second edition of William Buchan’s Domestic Medicine, he moved the chapter on diet from near the end to the very beginning, and incorporated into it ideas he had developed in his own Natural and Medical Dieteticon (Buchan 1809: 413-31, 1828: 19-40). Forsyth began by explaining that the “constant use of bread and animal substances excites an unnatural thirst, and leads to the immoderate use of beer and other stimulating liquors, which generate disease, and reduce the lower orders of the people to a state of indigence” (Buchan 1828: 19). Moreover, he said: “The plethoric … should eat sparingly of animal food. It yields far more blood than vegetables taken in the same quantity, and, of course, may induce inflammatory disorders. It acts as a stimulus to the whole system, by which means the circulation of the blood is greatly accelerated” (Buchan 1828: 20). In other words, although a meat diet is best suited to the physiological needs of laborers, it predisposes them to intemperance and poverty.
Forsyth differentiated vegetables and meat in terms of their acidity and alkalinity. He said that plant foods are more acidic and lighter in the stomach; in addition, they mix more readily in the stomach with other foods, and are more constipating, than meat. By contrast, he thought that animal foods display the “greater tendency to alkalescency and putrefaction,” which meant that they might cause diarrhea and dysentery, even if only rarely. Although in Forsyth’s opinion they mixed less well with other foods during digestion, they did help keep the bowels regular. He concluded by pointing out that meat produces “a more dense stimulating elastic blood” than does a vegetable diet, because animal food “stretches and causes a greater degree of resistance in the solids, as well as excites them to stronger action” (Buchan 1828: 39-40). This explained the value of meat to the health of the workingman within the contexts of contemporary chemical knowledge and of the humoral and solidist medical traditions.
Although Buchan and his posthumous editor wrote for British readers in the first instance, their work found receptive audiences in the United States, where do-it-yourself medicine flourished more than in England. In 1830, John Gunn of Knoxville, Tennessee, explained to the people of America (he dedicated his book to President Andrew Jackson) that “[f]ood … is intended to support nature” (Gunn 1830: 124). According to Gunn, because the most nourishing food is of animal origin, it can overheat and exhaust the body, unlike vegetable foods. In this respect he went beyond Forsyth’s caveats. Inasmuch as plant foods cause stomach acidity, flatulence, and debility if they are the only items in the diet, Gunn recommended a diet with balanced amounts of meat and vegetables.
Guides to diet as a way to health proliferated throughout the nineteenth century. Some were mixed with exhortations against alcohol or doctors’ drugs, while others promoted the benefits of physical culture. Gymnastics enthusiast Dio Lewis, for example, published books specifically related to gastrointestinal health, including Talks about People’s Stomachs (Boston, 1870) and Our Digestion; or My Jolly Friend’s Secret (Philadelphia, 1874). The latter title alone suggests that the book focuses on how to avoid the unpleasant sour feeling that characterizes dyspepsia. Similar titles appeared in England, such as W. T. Fernie’s Meals Medicinal: … Curative Foods From the Cook; In Place of Drugs From the Chemist (Bristol, 1905).
Some British writers had been promoting meatless diets for a century before Dr. William Lambe claimed, in 1806, that such diets are not hazardous, and could even cure tuberculosis. His work seems to have prompted Percy Bysshe Shelley, at the age of 21, to write a vegetarian tract that was republished many times between 1813 and 1884. But most physicians, a conservative lot, argued that plant foods are hard to digest, and would have agreed with Forsyth that meat is essential for maintaining strength and vitality (Green 1986: 46-7; Nissenbaum 1988: 45-9).
Thus, by the end of the nineteenth century, scientific studies of the chemistry of foods were being translated into professional texts and domestic health guides written by physicians. Most of the foodstuffs used as medicines during the 25 centuries between the era of Hippocrates and the discovery of pathogenic microbes in the 1870s quietly disappeared from pharmacopoeias in the early twentieth century, as did the vast majority of historic drugs. From then on, until the recent emergence of medical nutrition as a clinical subspecialty, regular medicine nearly abandoned the therapeutic application of diet to disease, save for patients with specific biochemical defects of intermediary metabolism or those with vitamin deficiencies.
In the meantime, however, some of the newly emerging information about foods (and distortions of it) was being exploited by promoters who functioned outside the mainstream of regular medicine, but who well understood the needs of mainstream Americans. They and their followers liked to think of themselves as innovators or “reformers.”
A number of nineteenth-century American health reformers—many of them energetic entrepreneurs—mounted effective populist attacks on both traditional and modern medical ideas simultaneously. Their chief selling point was that their ideas were more prophylactic than curative, a highly liberal position in a proud new republic in which regular medicine was dominated by political conservatives.
The allure of do-it-yourself medicine for the traditionally self-reliant American public was quickened by Samuel Thomson, an itinerant New England healer who, beginning in 1805, found that his system of botanical medicine was more profitable than farming. Although the curative focus of his system revolved around a relatively small number of remedies made from indigenous plants, Thomson also emphasized the importance of proper diet for health. His own version of the Hippocratic concept of the humors presumed that all disease arises from disordered digestion, resulting in insufficient heat for maintaining normal body function (Estes 1992).
Regular physicians were privately incensed at the economic competition inherent in Thomson’s system—one of his marketing slogans was “Every man be his own physician”—but in public they could only charge him with unscientific reliance on a single dogma, his insistence that all disease was caused by cold. Such notions were not entirely new. In 1682 an anonymous French writer who thought the large intestine was the primary seat of all disease had urged his readers to be their own physicians, and to use botanical remedies that grew in their own country (Thorndike 1958, 8: 409). But in the 1680s, such suggestions must have been seen as only adding to the surfeit of emerging medical ideas and not as heresy or competition.
The true fountainhead of the healthy diet in America was Sylvester Graham, a Presbyterian clergyman who preached the physiological benefits of abstinence from alcohol and sex, as well as dietary reliance on homegrown and homemade whole wheat bread. His medical notions were not entirely original, however. They stemmed largely from those of François-Joseph-Victor Broussais, a physician in Napoleonic France who had taught that all disease was caused by an irritation in the gastrointestinal tract. Therefore, said Broussais, almost any illness can be cured by removing the responsible stimulating irritation with bleeding and his version of a low diet.
Because of the historical association of meat eating with potentially pathogenic stimulation, vegetarian writers such as Graham associated avoidance of animal foods, alcohol, and sexual arousal with physical, moral, and spiritual health. The ideal diet he began advocating in the 1830s consisted of two small meatless meals daily that included whole wheat bread and cold water. Graham said that his regimen would preclude the exhaustion and debility that usually follow excessive stimulation (Nissenbaum 1988: 20, 39-49, 57-9, 126-7, 142-3).
His influence has persisted to the present in several guises. Graham certainly fostered the notion that a meat diet was the major cause of dyspepsia (Green 1986: 164). The self-sufficient Shakers, who farmed and sold medicinal herbs, adopted his teachings along with those of Thomson, and sold flour made to Graham’s specifications. However, although the Shakers believed that dyspepsia was the quintessential disease of postwar America, they were not vegetarians themselves (Green 1986: 30-1; Estes 1991b).
The Grahamite proselytizing of Dr. Russell T. Trall has had the most lasting impact of all. He emerged in the 1840s as an energetic and prolific promoter of hydropathy, the water-cure techniques introduced in Austria by Vincenz Priessnitz, who, like Thomson, found healing more lucrative than farming (Armstrong and Armstrong 1991: 81-2). Priessnitz did not preach vegetarianism, but his American disciple did. Trall rationalized his therapeutic methods by updating the ancient four temperaments in what he said was a more “practical” classification.
He associated the nervous temperament with the nervous system, and the sanguine temperament with the arteries and lungs. He explained that these two temperaments were more active, or irritable, than the bilious temperament, which he paired with the veins and musculoskeletal system, or the lymphatic temperament, paired with the abdominal viscera. Thus, Trall described the gastrointestinal tract as torpid and incapable of irritation in persons with a lymphatic temperament (Trall 1852: 287-90).
Although he cited Liebig’s differentiation of nitrogenous from non-nitrogenous foods, Trall based much of his dietary argument on his own reading of Genesis 1:29: “Behold, I have given you every herb bearing seed, which is upon the face of all the earth, and every tree in which is the fruit of a tree yielding seed; to you it shall be for meat.” From this, Trall concluded that “the vegetable kingdom is the ordained source of man’s sustenance” (Trall 1852: 399).
He went on to adduce anatomical, physiological, and “experimental” (actually, testimonial) evidence for the medical efficacy of a vegetable diet. He stated that the secretions of vegetable eaters are “more pure, bland, and copious, and the excretions … are less offensive to the senses,” and that their blood is “less prone to the inflammatory and putrid diatheses.” As a result, their “mental passions are more governable and better balanced,” a conclusion that Graham would have seconded, although he might not have accepted Trall’s association of any temperament with a nonirritable intestinal tract (Trall 1852: 410-12). Neither would Graham have tolerated Trall’s opinion that certain boiled or broiled meats, white fish, and an occasional egg were acceptable. Still, the major item on the menu at Trall’s water-cure establishments was unleavened bread made of coarse-ground, unsifted meal like Graham’s (Trall 1852: 421-4).
Ellen G. White, who began the Seventh-Day Adventist movement in the late 1840s, at the time Graham and Trall were achieving fame, was interested in the teachings of both. The Adventists’ continuing adherence to a vegetable diet today is presumed to be responsible, in large measure, for their lesser risk of major degenerative diseases (Webster and Rawson 1979).
In 1866, Mrs. White opened the Western Health Reform Institute in Battle Creek, Michigan, as an Adventist retreat that served its guests two vegetarian meals a day, modeled on those advocated by Russell Trall, along with the full range of water cures. Ten years later, Dr. John Harvey Kellogg became the Institute’s medical director. There he invented and prescribed the original versions of “Granola” and peanut butter, and the rolled breakfast cereals that he developed in collaboration with his younger brother, Will Keith Kellogg. Not only were cereals nutritious, said the brothers, but they would also counteract the autointoxicants then being accused of causing many illnesses, and facilitate bowel movements.
Charles W. Post, a former patient at the Battle Creek Sanitarium (as J. H. Kellogg renamed it), followed suit with his own “Postum” (1895), “Grape-Nuts” (1898), and “Post Toasties” (1908). The Kelloggs had set up a company to manufacture their products, but it was after the brothers had parted ways that W. K. Kellogg in 1906 introduced “Corn Flakes.” Although the Kelloggs and Post had many competitors among health-food manufacturers in the Battle Creek area alone, their products still dominate the marketplace for breakfast cereals. Their initial success was attributable largely to their advertised medical uses. Like other bran foods, W. K. Kellogg’s “All-Bran” was first marketed as a relief from constipation. In 1931 Nabisco’s “Shredded Wheat,” a competing product, was proclaimed to offer “Strength in Every Shred” (Green 1986: 305-12; Whorton 1988; Armstrong and Armstrong 1991: 99-119).
A few other foods entered the realm of medical usage outside the framework established by Sylvester Graham. The most visible was probably the tomato, one of the most fleeting therapeutic discoveries in the history of medicine. Many Europeans had regarded it as inedible or poisonous after it was brought from the New World in the 1540s, because it belongs to the same family as the deadly nightshade. Its therapeutic value was reported in London as early as 1731, but when sent to North America soon afterward, it was only as an ornamental plant (Smith 1991).
In 1825, Dr. Thomas Sewall reported, in Washington, D.C., that tomatoes could cure bilious disease, probably because some of their particles are yellow, like jaundice, the yellowish discoloration of the skin produced by the accumulation of bile pigments in liver or gall bladder disease. Then, in 1835, a brazen medical entrepreneur, John Cook Bennett, began telling Americans that tomatoes could cure diarrhea and dyspepsia. He claimed that Indians used them to promote diuresis, while others said they were good for fevers. The agricultural press spread Bennett’s enthusiasm, and within five years several extracts of tomato had been marketed as typical panaceas (Smith 1991).
An 1865 advertisement claimed that “Tomato pills, will cure all your ills.” However, reports of the hazards of tomatoes, as well as their lack of therapeutic efficacy, had already begun to resurface. Although the merits of tomatoes were being debated as late as the 1896 annual meeting of the American Medical Association, they had long since lost their medical appeal—but by then they had become firmly established in American cookery (Smith 1991).
The role of food as therapy is still central to several modern alternative-healing systems. Naturopathic healers (also known as naturists) cite Samuel Thomson as their chief source of authority, declaring that the “Naturist will be both dietician and herbalist.” They, too, look on the stomach as “in almost every instance the seat of disease,” and like Russell Trall, they cite Genesis 1:29 (Clymer 1960: 12, 33, orig. 1902). Some naturopathic regimens include coffee enemas, administered at two-hour intervals, to cleanse the intestines.
Occasionally, a small number of people will use a food for medical reasons despite evidence of the dangers of such usage. For example, unpasteurized milk has been promoted neither by advertising nor by any organization—its use seems to be inspired chiefly by folklore, supported by its consumers’ supposition that it is a matter of freedom of choice. Those who persist in using raw milk, sometimes illegally, claim that it has more nutritive value than pasteurized milk, that it increases their resistance to disease and enhances their fertility, and that it contains unidentified substances, such as an “antistiffness factor” that helps prevent rheumatism (Potter et al. 1984).
When the word “macrobiotic” entered English usage in the early eighteenth century, it signified a diet or other rules of conduct that would prolong life. Its modern usage appeared in the early 1960s, when the Japanese writer Georges Ohsawa introduced a new concept in dietary therapy. He began with the premise that there is “no disease that cannot be cured by ‘proper’ therapy.” His idea of “proper” therapy, which originated in ancient Chinese ideas of the balance between the complementary forces of yin and yang, was based on whole grain cereals and on avoiding fluids. As much as 30 percent of the diet could be meat when the patient began the prescribed regimen, but by the time he or she had progressed through the entire ten-step sequence, which Ohsawa called the “Zen macrobiotic diet,” the patient would be eating only grain products. In the 1970s, after Ohsawa’s death, Michio Kushi revised the diet, dropping its meatless extremes. In addition, Kushi developed a cadre of “counselors” whose special training permits them alone to make diagnoses within the system. Many do so by iridology, which associates segments of the iris with specific parts of the body (iridology has also been practiced by chiropractors, among others). Because Ohsawa had said that the macrobiotic diet could cure any disease, it is sometimes resorted to by victims of cancer, who are instructed to chew each mouthful 150 times, to enhance the food’s strengthening yang properties and to preclude overeating; healthy people need chew only 50 times (Macrobiotic diets 1959; AMA Council 1971; Simon, Wortham, and Mitas 1979; Cassileth and Brown 1988).
In the 1920s, a German physician, Max Gerson, developed a dietary cure for cancer based on his belief (which accorded with the pioneering studies of intermediary metabolism by Otto Warburg) that the rate of cancer cell growth depends on imbalances in the aerobic and anaerobic metabolic reactions of the malignant cells. Gerson said his diet would restore those balances to normal by, among other things, increasing potassium intake. It resembles the diet served at Russell Trall’s water-cure establishments, with the addition of calves’ liver, a mixture of special fruit juices, and coffee enemas. Gerson refined his regimen after emigrating to the United States, and it is still available from his heirs in Tijuana, Mexico (American Cancer Society 1990; Green 1992).
Other clinics in Tijuana offer diets that are said to cure cancer by strengthening the immune system and minimizing the intake of potential toxins. Such programs are often accompanied by vitamins, minerals, enzymes, and gland extracts and by stimuli to moral and religious health that are reminiscent of the activities provided at the Battle Creek Sanitarium (American Cancer Society 1991). Several diets and dietary supplements have been marketed without any medicalized rationalization beyond that used to promote five products sold by United Sciences of America, Inc.: “to provide all Americans with the potential of optimum health and vital energy” (Stare 1986).
Modern foods that originated in nineteenth-century efforts to improve the American diet are as healthful as their nutritional content allows. Nevertheless, today many of them—especially breakfast cereals—are fortified with vitamins and minerals, often to enhance their presumed acceptability to consumers. But no one has yet discovered a food or devised a diet that can be proved to cure arthritis, cancer, or any other illness that is not the result of a specific nutritional deficit.
Dietetics in Other Cultures
Important discoveries in the biochemistry of food began to appear in the twentieth century, many of them soon after the discovery of vitamins. Eventually they led to the somewhat contentious discussions of the influence of specific foods on human illness that are a recurrent feature of American culture today. We have few written records that permit detailed comparisons of the dietary medicine practiced in cultures other than those whose roots were in ancient Egypt, Greece, and Rome. But the broad outlines of medical dietetics in a few other cultures can be ascertained.
Among North American Indians
There is evidence that at least some Indian groups of North America had specific ideas of what constituted a proper diet, and that they took dietary precautions when they were sick, such as starving a fever. Colonists reported that New England Indians remained healthy so long as they continued to eat their usual foods, and that they rejected some English foods even when, as hunter-gatherers, they were at the mercy of an uncertain food supply (Vogel 1970: 251-3).
Several Indian societies put plant foods to differing therapeutic uses. Pumpkins were used by the Zunis to treat the skin wounds made by cactus spines, but by the Menominees as a diuretic. New England Indians thought that sarsaparilla was useful as food when they were on the move, whereas the Iroquois used it to treat wounds. The Penobscots of the East Coast and the Kwakiutls of the West Coast used sarsaparilla as a cough remedy, and the Ojibwas applied it to boils and carbuncles (Vogel 1970: 356-61). It seems clear that such uses were as much based on the post hoc ergo propter hoc fallacy as were those in the Hippocratic-Galenic tradition. The Aztecs found some uses for the indigenous flora of Mexico that resembled those of their unrelated neighbors north of the Rio Grande. Thus, both Aztecs and the Indians who lived in Texas used the pods and seeds of edible mesquite as a remedy for diseased eyes (Ortiz de Montellano 1990).
In the Far East
The Chinese diet was even more rigidly associated with physiological concepts than that of the Graeco-Roman tradition. Balance between the classical complementary forces of yin and yang dominated the structure and function of the body, and was as important to health as were balances among the Hippocratic humors. The basic digestive processes described in Chinese texts resemble those postulated by Aristotle, although only the latter based some of his conclusions on dissections. The Chinese assumed that food passes from the stomach to the liver, where it yields its vital forces to the muscles, and its gases to the heart (the origin of both blood and animal heat), while its liquid components move to the spleen, then the lungs, and finally the bladder (Leicester 1974: 45-9).
According to the teachings of Huang Ti, the mythical Yellow Emperor said to have reigned about 2600 B.C., but probably not written down before 206 B.C., proper medical treatment includes foods that can correct the patient’s “mode of life.” Because the Chinese recognized five elements—water, fire, wood, metal, and earth—the number five recurs throughout Chinese thought: They associated each of the five major organs—liver, heart, lungs, spleen, and kidneys—with one of the “five grains,” the “five tree-fruits,” the “five domesticated [meat] animals,” and the “five vegetables,” in terms of any organ’s nourishment. In ancient China, as in the medieval European tradition, taste was correlated with a food’s action, but within a different framework: Sweet foods were appropriate to the health of the liver, sour foods to the heart, bitter foods to the lungs, salty foods to the spleen, and pungent foods to the kidneys (Veith 1949: 1-9, 55-8).
The canonical sweet foods of Chinese dietetics (rice, dates, beef, and mallows) were believed to enter the body through the spleen, to produce a slowing effect. Sour foods (small peas, plums, dog meat, and leeks) enter via the liver, and produce an astringent, or binding, effect. Bitter foods (wheat, almonds, mutton, and scallions) enter via the heart, and strengthen, or dry, the body. Salty foods (large beans, chestnuts, pork, and coarse greens) enter through the kidneys, with a softening effect. And, finally, pungent foods (millet, peaches, chicken meat, and onions) disperse the smallest particles of the body after entering it via the lungs. One important precept that followed from these complex relationships was that a patient should avoid foods of the correlative taste whenever he or she had a disease in the organ through which that taste enters the body (Veith 1949: 199-207). The traditional Ayurvedic medical practices of India share many concepts with those of China, although their professional literatures differ in their underlying cosmological premises (Leicester 1974: 51-2).
Chinese medical dietetics loosened its adherence to strict rules as the centuries passed. For instance, although Ko Hung (under the nom de plume Pao-p’u Tzu) described basic principles for constructing healthy diets in about A.D. 326, he did not prescribe specific foods as cures, even if he did maintain that a patient’s blood supply can be increased simply by eating more food (Huard and Wong 1968: 19-23). Yet eventually, as European dietary and medical notions penetrated the Far East, many traditional Chinese remedies disappeared—but not all of them.
By contrast, the medical lore of neighboring Tibet did not succumb to external influence. Along lines that Galen might have approved, Tibetan healers classified patients, in the first instance, as wind-types, bile-types, or phlegm-types, and, secondarily, by habitus, complexion, powers of endurance, amount of sleep, personality, and life span. For each patient type and subtype—they were all mutually exclusive—doctors prescribed specific medicines and foods, although their prescriptions show that they recognized more mixtures of the basic types than might have been supposed from their texts alone (Finckh 1988: 68-73).
Food or Drug? The Example of Garlic
Therapeutic descriptions by medical authorities since Dioscorides are still being cited in order to market some foods that were used as medicine in the ancient world. Such foods are not advertised as remedies; if they were in the United States, at least, they would be subject to the proofs of efficacy and safety required by the Food and Drug Administration. Nevertheless, they illustrate the continuing difficulty of devising mutually exclusive definitions of foods and drugs. The long history of garlic as a medical remedy is not only a case in point; it also exemplifies historical changes in medical theory.
Garlic in Ancient and Medieval Medicine
Garlic is the bulb of Allium sativum, a member of the lily family; the Romans derived the Latin genus name from a Celtic word for pungent bitterness. Ancient Egyptians did not use garlic as a remedy, but they did include it in a foul-smelling amulet designed to keep illness away from children (Sigerist 1951: 283). It was also found among Tutankhamen’s burial goods—as a seasoning (Block 1985; Manniche 1989: 70-1). Modern Sudanese villagers, like some ancient Egyptians, place garlic in a woman’s vagina to determine if she is pregnant; if she is, the characteristic odor will appear in her breath the next day (Estes 1989: 117). Ancient Mesopotamians prescribed garlic for toothache and painful urination and incorporated it in amulets against disease, as did the Egyptians (Levey 1966: 251). Traditional Chinese medicine associated garlic with the spleen, kidneys, and stomach, while Ayurvedic practitioners in India considered it a panacea, even if its specific actions were understood in the humoral and solidist traditions (Hobbs 1992).
The first-century testimonies of Dioscorides and Pliny the Elder provide similar pictures of the use of garlic in Graeco-Roman practice. The nearly complete text of its description by Dioscorides is given here because his book was the bedrock of virtually all herbal lore and prescription writing for the next 15 centuries. The translation is from the first—and still only—English version (1655) of his Materia Medica:
It hath a sharp, warming biting qualitie, expelling of flatulencies, and disturbing of the belly, and drying of the stomach causing of thirst, & of puffing up, breeding of boyles in ye outsyde of the body, dulling the sight of the eyes.… Being eaten, it drives out the broade wormes, and drawes away the urine. It is good, as none other thing, for such as are bitten of vipers, or of the Haemorrhous [hemorrhoids], wine being taken presently after, or else that being beaten small in wine, & soe dranck. It is applyed also by ye way of Cataplasme [a watery poultice] both for the same purposes profitably, as also layd upon such as are bitten of a mad dogge. Being aten, it is good against the chaunge of waters.… It doth cleare the arteries, & being eaten either raw or [boiled], it doth assuage old coughes. Being dranck with decoction of [oregano], it doth kill lice and nitts. But being burnt, and tempered with hon[e]y it doth cure the sugillationes oculorum [black eyes], and Alopeciae [bald spots] being anointed [with it], but for the Alopeciae (it must be applyed) with unguentum Nardinum [an extract of Nardostachys jatamansi]. And with salt & oyle it doth heale [papular eruptions]. It doth take away also the Vitiligenes, & the Lichenes, & the Lentigenes, and the running ulcers of the head, and the furfures [purpural spots], & ye Lepras [other spots, not leprosy], with hon[e]y. Being boiled with Taeda [pine tar] and franckincense, & kept in the mouth it doth assuage the paine of ye teeth. And with figge leaves & [cumin] it is a Cataplasme for such as are bitten of the Mygale [shrew-mouse]. But the leafes decoction is an insession [insertion into the vagina] that brings downe the Menstrua & the Secundas. It is also taken by way of suffumigation [fumigation, in which the patient stands over the burning medicine so that its fumes rise into her vagina] for ye same purpose. 
But the stamping that is made of it and ye black olive together, called Myrton, doth move the urine & open ye mouths of ye veines & it is good also for the Hydropicall [edematous] (Dioscorides 1934: 188-91).
In short, Dioscorides says that garlic expels intestinal worms and skin parasites, protects against venomous animals, neutralizes internal and external inflammations of many kinds, relieves toothaches and coughs, reduces hemorrhoids, and stimulates menstruation. Most important of all for later writers, garlic removes excess fluid from the body by dilating blood vessels and stimulating the kidneys.
Pliny, on the other hand, was an inquisitive encyclopedist, not a physician. In his Natural History he lists many of the same uses for garlic. But whereas only Dioscorides says that it can move the urine, Pliny adds that garlic can cure epilepsy, cause sleep, stimulate the libido, and neutralize the poisonous effects of aconite and henbane (although such antidotal effects are unlikely) (Manniche 1989: 70-1; Hobbs 1992). Together, he and Dioscorides dictated the therapeutic uses of garlic for centuries.
Thus, al-Kindi, a royal tutor in ninth-century Baghdad, transmitted lessons he had learned from Greek and Roman sources when he said that garlic was good for inflamed ears (Levey 1966: 251). As Arabic works became available in European languages, ancient remedies were systematized within an increasingly rigid humoral framework. Consequently, the entry for garlic in a late-fourteenth-century Tacuinum Sanitatis says: “Nature: Warm in the second degree, dry in the third. Optimum: The kind that does not have too pungent a smell. Usefulness: Against poisons. Dangers: For the faculty of expulsion, and the brain. Neutralization of the Dangers: With vinegar and oil” (Arano 1976, plates 96-7). Such information changed from time to time, perhaps as doctors changed their minds, but also perhaps because of errors in transcribing manuscripts: A slightly later version of the same health handbook says exactly the same things about garlic, but describes it as warm in the fourth degree.
The most famous surgeon in sixteenth-century Europe, Ambroise Paré, based his therapeutic assessments on his own observations, perhaps because he had not learned his profession by scholastic disputation in a university. He thought garlic’s major value was as a preventive against serious contagions:
Such as by the use of garlick have not their heads troubled, nor their inward parts inflamed, as Countrey people, and such as are used to it, to such there can bee no more certain preservative and antidote against the pestiferous fogs or mists, and the nocturnal obscurity, than to take it in the morning with a draught of good wine; for it being abundantly diffused presently over all the body, fils up the passages thereof, and strengtheneth it in a moment (Paré 1634: 823-4, 1031).
Paré’s therapeutic reasoning is obscure; he seems to have presumed an analogy between garlic and onions, both of which he classified among the hottest of all remedies, those warm in the fourth degree. Surgeons who agreed with his Galenic assumptions quickly adopted an onion poultice Paré had invented on the premise that onions “attract, draw forth, and dissipate the imprinted heate” (Sigerist 1944).
Throughout the seventeenth century, physicians and laymen alike employed garlic as a diuretic and for virtually all the other uses listed by Pliny and Dioscorides, as well as for its ability to protect against contagious diseases (Leighton 1970: 306-7; Hobbs 1992). It was the major ingredient in one of more than 50 prescriptions recommended by the eminent London physician Thomas Willis for treating serious respiratory disease, especially consumption (Willis 1692: 86). As late as the early nineteenth century, physicians still relied on the properties that Dioscorides had ascribed to garlic, as revealed by its description in an influential 1794 compendium of medical practice:
The root applied to the skin inflames.… Its smell is extremely penetrating and diffusive; when the root is applied to the feet, its scent is soon discovered in the breath; and taken internally, its smell is communicated to the urine, or the matter of an issue, and perspires through the pores of the skin. This pungent root stimulates the whole body. Hence, in cold leucophlegmatic habits, it proves a powerful expectorant, diuretic, and if the patient be kept warm, sudorific; it has also been supposed to be emmenagogue. In catarrhous disorders of the breast, flatulent cholics, hysterical, and other diseases proceeding from laxity of the solids, it has generally good effects: it has likewise been found serviceable in some hydropic cases.… The liberal use of garlick is apt to occasion headachs, flatulencies, febrile heats, inflammatory distempers, and sometimes discharges of blood from the haemorrhoidal vessels. In hot bilious constitutions, where there is already a degree of irritation, and where there is reason to suspect an unsound state of the viscera, this stimulating medicine is manifestly improper [contraindicated], and never fails to aggravate the distemper. Garlick made into an ointment with oils, &c. and applied externally, is said to resolve … cold tumors, and has been greatly esteemed in cutaneous diseases. It has likewise been sometimes employed as a repellent. When applied in the form of a poultice to the pubis, it has sometimes proved effectual in producing a discharge of urine, when retention has arisen from a want of due action of the bladder; and some authors have recommended, in certain cases of deafness, the introduction of a single clove, wrapt in thin muslin or gauze, into the meatus auditorius [ear canal] (Edinburgh New Dispensatory 1794: 87-8).
The Dispensatory, which reflects contemporary therapeutic practices at the Royal Infirmary of Edinburgh, also points out that garlic has been reported to be an effective treatment for malaria and smallpox. William Buchan of Edinburgh described a garlic ointment for whooping cough that could be prepared at home:
by beating [it] in a mortar … with an equal quantity of hog’s lard. With this the soles of the feet may be rubbed twice or thrice a day; but the best method is to spread it upon a rag, and apply it in the form of a plaster. It should be renewed every night and morning at least, as the garlic soon loses its virtue (Buchan 1809: 212).
James Thacher presented selected aspects of the Edinburgh professors’ views of garlic in his influential American New Dispensatory (Thacher 1813: 135-6). The first U.S. Pharmacopoeia (1820), which owed much of its content to its Edinburgh counterpart, included garlic among the remedies accepted by the American medical profession. Its 1905 edition listed garlic for the last time, but the bulb remained a recommended remedy in the United States as late as the 1936 edition of the National Formulary (Vogel 1970: 306-7).
Garlic during the Scientific Revolution
Solidist theories of pathophysiology melded well with the emergence of experimental chemistry in the eighteenth century. In 1822, Jacob Bigelow, a professor of medicine at Harvard, based much of his brief description of garlic’s effects on those of its active principle, recently isolated as an oil:
Garlic and other plants of its genus have a well known offensive odour and taste, which, however, in a weakened state, render them an agreeable condiment with food. These qualities depend on a thick, acrid, yellowish, volatile oil, which may be separated by distillation, leaving the bulbs nearly inert. Garlic is stimulant, expectorant and diuretic. It is given in the form of syrup in chronic coughs, and the secondary stages of pneumonia; also, in combination with other medicines, in dropsy. Externally the bruised bulbs, in the form of a poultice, act as rubefacients (Bigelow 1822: 58).
An important reference text published 50 years later describes the effects of garlic much as both Dioscorides and Bigelow had, but within the more explicit context of solidist physiology:
Its effects on the system are those of a general stimulant. It quickens the circulation, excites the nervous system, promotes expectoration in debility of the lungs, produces diaphoresis or diuresis according as the patient is kept warm or cool, and acts upon the stomach as a tonic and carminative. It is also said to be emmenagogue.… Moderately employed, it is beneficial in enfeebled digestion and flatulence.… It has been given with advantage in chronic catarrh, and other pectoral affections in which the symptoms of inflammation have been subdued, and a relaxed state of the vessels remains.… If taken too largely, or in excited states of the system, garlic is apt to occasion gastric irritation, flatulence, hemorrhoids, headache, and fever. As a medicine, it is at present more used externally than inwardly. Bruised, and applied to the feet, it acts very beneficially, … in disorders of the head; and is especially useful in the febrile complaints of children, by quieting restlessness and producing sleep. Its juice … is frequently used as a liniment in infantile convulsions, and other spasmodic or nervous affections in children (Wood and Bache 1874: 87-9).
Although garlic was not among the materia medica of Thomson (Estes 1992), one of his followers, the prominent eclectic physician John King, described its efficacy as a gastric tonic, as an anthelmintic, and in respiratory illnesses, especially those of children (Hobbs 1992). Despite the Shakers’ unreserved reliance on Thomson’s teachings, they recommended garlic as a stimulating tonic to promote expectoration in upper respiratory conditions, and to promote both diuresis and bowel movements. They also applied it externally to relieve pulmonary symptoms, just as Willis and the doctors of Edinburgh had done (Miller 1976: 177).
Modern herbalists preserve some indications for garlic inherited from the ancient world, including its use in love potions and prophylactic amulets. Others also prescribe it for effects not recognizable among those mentioned in medical works of the past, such as for “life-prolonging powers,” and improved memory and mental capacity (Huson 1974: 32-3, 53-4, 252, 279, 312).
Garlic in the Modern Laboratory
In 1844, Theodor Wertheim, a German chemist, distilled a strongly pungent substance from garlic oil. He called the chemical group associated with the characteristic odor “allyl,” from the plant’s scientific name. Exactly 100 years later, Chester J. Cavallito and his colleagues at Sterling-Winthrop Company laboratories in Rensselaer, New York, discovered the chemical structure of allicin, the compound in garlic that produces its odor.
Four years later, in 1948, Arthur Stoll and Ewald Seebeck, at Sandoz Company laboratories in Basel, isolated alliin (0.9 percent of fresh garlic), the molecule that is biotransformed to allicin (0.1-0.5 percent of fresh garlic, up to 0.9 percent of garlic powder), which represents a doubling of the alliin molecule. The reaction is mediated by the enzyme allinase (in association with the coenzyme pyridoxal phosphate, or vitamin B6). Garlic does not emit its typical odor until it is crushed. Stoll and Seebeck found that crushing releases allinase from the bulb’s cells, permitting it to act on alliin to produce the odoriferous allicin (as well as ammonia and pyruvate ion).
Finally, in 1983, Eric Block of New York and workers at the University of Delaware and at the Venezuelan Institute of Scientific Investigations in Caracas established the structure of ajoene (ajo is the Spanish word for garlic), formed by the condensation of two molecules of allicin in garlic cloves. Ajoene cannot be detected in proprietary preparations of garlic, only in fresh cloves (Block 1985; International Garlic Symposium 1991: 10-11).
Several potentially therapeutic effects have been attributed to garlic oil and its chemical constituents since the early 1980s. Some preparations reduce plasma concentrations of cholesterol, triglycerides, and low-density lipoproteins to a modest extent, while increasing high-density lipoproteins. These effects seem to be secondary to inhibition of an enzyme necessary for cholesterol synthesis (hydroxy methyl glutaryl coenzyme A reductase), and may be associated with the ability of the same preparations to reduce blood pressure in hypertensive animals and men. An aqueous extract of garlic has been reported to inhibit angiotensin converting enzyme; modern drugs that selectively inhibit that enzyme are highly effective as antihypertensive medicines. Garlic and its extracts also inhibit thrombosis by inhibiting platelet aggregation, decreasing blood viscosity, dilating capillaries, and triggering fibrinolysis (clot breakdown). Ajoene, which is about as potent as aspirin as an antithrombotic compound, blocks platelet fibrinogen receptors. Indeed, it is probably the major antithrombotic factor in garlic juice (Block 1985; Lawrence Review 1988; Auer et al. 1990; Kiesewetter et al. 1990; Mader 1990; Vorberg and Schneider 1990; International Garlic Symposium 1991: 9, 19-44).
In 1858 Louis Pasteur found that garlic has antibacterial properties. Later studies have shown that a highly diluted solution of its juice can inhibit the growth of several important pathogenic bacteria, including Staphylococcus, Streptococcus, Bacillus, and Vibrio cholerae, as well as pathogenic yeasts and other fungi. Since then, garlic has been reported to inhibit the in vitro growth of several species of fungi, gram-positive and gram-negative bacteria, and the tuberculosis bacillus, and to reduce the infectivity of viruses, such as influenza B. Because it is the malodorous allicin that is responsible for garlic’s antimicrobial effects, it is no surprise that the Sandoz Company decided not to develop it as an anti-infective drug following the discoveries of Stoll and Seebeck.
Other studies have demonstrated garlic’s antineoplastic activity in rodents, but this effect may be associated with the trace elements germanium and selenium, rather than with allicin or its metabolites. Finally, recent evidence suggests that garlic decreases plasma concentrations of thyroxine, thyroid stimulating hormone, and glucose (and increases the concentration of insulin) (Block 1985; Lawrence Review 1988; Horowitz 1991; International Garlic Symposium 1991: 28-39; Farbman et al. 1993).
Several preparations of garlic are available in the United States as “health foods,” although no clinical indication other than “goodness” is advertised for them—thus exacerbating the problem inherent in the coined word “nutraceutical.” Most research on the therapeutic effects of garlic has been carried out in Great Britain and Europe, where garlic’s medical value is more widely proclaimed. Some commercial garlic preparations are said to lack the characteristic odor, which means they are probably incapable of the potentially beneficial effects that have been attributed to alliin, allicin, and ajoene. Moreover, even though garlic has been used for culinary purposes for many centuries with no known toxic effects, it is possible that concentrated preparations might have deleterious effects in patients with diabetes or those taking anticoagulant drugs, but no reports of such effects seem to have been published (Lawrence Review 1988). Several sulfur-containing compounds found in fresh garlic have been held responsible for the acute gastroenteritis that ingestion of large amounts of its buds may induce in young children, while chronic ingestion of garlic has been reported to produce goiter by inhibiting iodine uptake by the thyroid gland (Lampe and McCann 1985: 28-9).
Whatever the eventual fate of garlic in pharmacological therapeutics, its history well illustrates how medical concepts have often been adapted to fit newly emerging ideas, even in the absence of any validation of the revised medical notions—other than the evidence implicit in the average adult patient’s 95 percent chance of recovering from any nondevastating illness. Until the twentieth century, physicians had no better evidence on which to base dietetic prescriptions and recommendations. As Marie-François-Xavier Bichat said about two centuries ago: “The same drugs were successively used by humoralists and solidists. Theories changed, but the drugs remained the same. They were applied and acted in the same way, which proves that their action is independent of the opinion of doctors” (Estes 1990: ix). Clearly, Bichat would have included foods—even garlic—within the meaning of his word “drugs.”