Donald J. Ortner & Gretchen Theobald. Cambridge World History of Food. Editors: Kenneth F. Kiple & Kriemhild Conee Ornelas. Volume 1. Cambridge, UK: Cambridge University Press, 2000.
The quantity and nutritional quality of food available to human populations undoubtedly played a major role in the adaptive processes associated with human evolution. This should have been particularly the case in that period of human history from Mesolithic times to the present when epochal changes took place in the subsistence base of many human societies. In the Near East the domestication of plants and animals began toward the end of the Mesolithic period but became fully developed in the Neolithic. This development included agriculture and pastoralism along with cultural changes associated with greater sedentism and urbanism.
Paleopathology, primarily through the study of human skeletal remains, has attempted to interpret the impact such changes have had upon human health. A recent focus has been on the transition from a hunting and gathering way of life to one associated with incipient or fully developed agriculture (e.g., Cohen and Armelagos 1984b; Cohen 1989; Meiklejohn and Zvelebil 1991). One of the questions being asked is whether greater dependence on fewer food sources increased human vulnerability to famine and malnutrition. The later transition into an increasingly sedentary urban existence in the Bronze and Iron Ages has not been as carefully studied. However, analysis of data from skeletal remains in numerous archaeological sites is providing insight into some of the effects upon nutrition that increasing human density and attendant subsistence changes have had.
In the study of prehistoric health, perhaps the least complex nutritional data comes from human remains that have been mummified. Preservation of human soft tissues occurs either naturally, as in the bogs of northern Europe and very arid areas of the world, or through cultural intervention with embalming methods. Some mummies have provided direct evidence of diet from the contents of their stomachs and intestines (e.g., Glob 1971: 42-3; Fischer 1980: 185-9; Brothwell 1986: 92). However, the most ubiquitous source of data comes from human skeletal remains, where the impact of dietary factors tends to be indirect, limited, and difficult to interpret.
Generally, only about 10 percent of a typical sample of human archaeological burials will show any significant evidence of skeletal disease. (Clearly the people represented by the normal-appearing burials died of something, but there are no anatomical features that help determine what this might have been.) Of the 10 percent showing pathology, about 90 percent of their disease conditions resulted from trauma, infection, or arthritis—the three predominant pathological skeletal conditions. All other diseases, including those that might be caused by malnutrition, are incorporated in the residual 10 percent, meaning that even in a large sample of archaeological skeletons, one is unlikely to find more than a few examples of conditions that might be attributable to nutritional problems.
Once a pathological condition due to malnutrition is recognized in bone, correct diagnosis is challenging. Identification begins with those nutritional diseases most commonly known today that can affect the skeleton. These are: (1) vitamin D deficiency, (2) vitamin C deficiency, (3) iodine deficiency, (4) iron deficiency, (5) excessive dietary fluorine, (6) protein-calorie deficiency, and (7) trace element deficiencies.
Care needs to be exercised both in establishing a preferred diagnosis for a pathological condition and in interpreting the diagnoses of others. This is particularly the case in interpreting evidence of malnutrition in archaeological remains. Malnutrition is a general state that may cause more than one pathological condition in the same individual, and it may also be accompanied by other disease conditions, making the pathological profile complex and confusing. For example, scurvy and rickets may appear together (Follis, Jackson, and Park 1940), or scurvy may be associated with iron-deficiency anemia (Goldberg 1963).
All of these issues place significant limitations on reconstructing nutritional problems in antiquity. However, emerging research methods, such as stable isotope analysis and bone histology, evidence from related fields, such as dental pathology, and new areas of research concentration, such as infant skeletal studies, may provide additional data. Analysis of stable isotopes of human bone collagen allows us to determine the balance of food sources between terrestrial animal, marine animal, and plant materials (Katzenberg 1992). Isotope analysis of human hair may provide a more refined breakdown of plant materials eaten over a much shorter period than the 25 to 30 years that bone collagen analysis provides (White 1993: 657). C. D. White (1993: 657), working on prehistoric human remains from Nubia, has claimed that isotopic analysis of hair points to a seasonal difference between consumption of plants such as wheat, barley, and most fruits and vegetables and consumption of the less nutritious plants such as sorghum and millet.
Analysis of bone histology by M. Schultz and fellow workers (Schultz 1986, 1990, 1993; Carli-Thiele and Schultz 1994; Schultz and Schmidt-Schultz 1994) has identified features that assist in differential diagnosis in archaeological human skeletal remains. In one study of human remains from an Early Bronze Age (2500 to 2300 B.C.) cemetery in Anatolia, Schultz (1993: 189) detected no anatomical evidence of rickets in an infant sample. However, microscopic examination revealed a rickets prevalence of 4 percent.
Dental paleopathology provides an additional dimension to understanding nutritional problems. For example, caries rate and location may help identify what type of food was eaten (e.g., Littleton and Frohlich 1989, 1993; Meiklejohn and Zvelebil 1991). Enamel hypoplasias, which are observable defects in dental enamel, may provide information about timing and severity of nutritional stress (Goodman, Martin, and Armelagos 1984; Goodman 1991; Meiklejohn and Zvelebil 1991). Patterns of antemortem tooth loss may suggest whether an individual suffered from a nutritional disease, such as scurvy (Maat 1986: 158), or excess calculus or poor dental hygiene (Lukacs 1989).
Thorough analysis of skeletons of infants and children, which until recently has received minimal attention, can also provide valuable information on the health of a population. Indeed, because of a child’s rapid growth and consequent need for optimal nutrition, immature skeletons will reflect the nutritional status of a population better than those of adults. This is especially the case with diseases such as scurvy, rickets, and iron-deficiency anemia, whose impact is greatest on children between the ages of 6 months and 2 years (Stuart-Macadam 1989a: 219).
In this chapter, we discuss skeletal abnormalities associated with nutritional diseases for which there is archaeological skeletal evidence in various geographical areas and time periods in the Old World. These diseases are: vitamin D deficiency, vitamin C deficiency, iron deficiency, fluorosis, and protein-calorie deficiency. We will focus on anatomical evidence of nutritional disease but will include other types of evidence as it occurs. For a discussion of the pathogenesis of these diseases we refer the reader to other sources (Ortner and Putschar 1981; Resnick and Niwayama 1988).
Vitamin D Deficiency
Vitamin D deficiency causes rickets in children and osteomalacia in adults. In general, these conditions should be rare in societies where exposure to sunlight is common, as the body can synthesize vitamin D precursors with adequate sunlight. In fact, there has been some speculation that rickets will not occur in areas of abundant sunlight (Angel 1971: 89). Cultural factors, however, may intervene. The use of concealing clothing such as veils, the practice of long-term sequestration of women (purdah), or the swaddling of infants (Kuhnke 1993: 461) will hinder the synthesis of vitamin D. Thus, in modern Asia both rickets and osteomalacia have been reported, attributed to culturally patterned avoidance of sunlight (Fallon 1988: 1994). In the Near East and North Africa cases of rickets have been reported in large towns and sunless slums (Kuhnke 1993: 461).
Vitamin D is critical to the mineralization of bone protein matrix. If the vitamin is not present during bone formation, the protein matrix does not mineralize. Turnover of bone tissue is most rapid during the growth phase, and in rickets much of the newly forming protein matrix may not be mineralized. This compromises biomechanical strength; bone deformity may occur, especially in the weight-bearing limbs, and may be apparent in archaeological human remains.
In the active child, the deformity tends to be in the extremities, and its location may be an indication of when the individual suffered from this disease. Deformities that are restricted to the upper limbs may indicate that the child could not yet walk (Ortner and Putschar 1981: 278), whereas those that show bowing of both the upper and lower limbs may be indicative of chronic or recurring rickets (Stuart-Macadam 1989b: 41). Bowing limited to the long bones of the lower extremities would indicate that rickets had become active only after the child had started walking (Ortner and Putschar 1981: 278).
There is a relatively rare form of rickets that is not caused by a deficiency in dietary vitamin D. Instead, this condition results from the kidneys’ failure to retain phosphorus (Fallon 1988: 1994), and as phosphate is the other major component of bone mineral besides calcium, the effect is deficient mineralization as well. This failure may be caused by a congenital defect in the kidneys or by other diseases affecting the kidneys. The importance of nondietary rickets to this chapter is that the anatomical manifestations in the skeleton are indistinguishable from those caused by vitamin D deficiency.
The adult counterpart of rickets in the skeletal record is osteomalacia, whose expression requires an even more severe state of malnutrition (Maat 1986: 157). Women are vulnerable to osteomalacia during pregnancy and lactation because their need for calcium is great. If dietary calcium is deficient, the developing fetus will draw on calcium from the mother’s skeleton. If vitamin D is also deficient, replacement of the mineral used during this period will be inhibited even if dietary calcium becomes available. As in rickets, biomechanical strength of bone may be inadequate, leading to deformity. This deformity is commonly expressed in the pelvis as biomechanical forces from the femoral head compress the anteroposterior size of the pelvis and push the acetabula into the pelvic canal.
Undisputed anatomical evidence of rickets or osteomalacia in archaeological remains is uncommon for several reasons. First, criteria for diagnosis of these conditions in dry bone specimens have not been clearly distinguished from some skeletal manifestations of other nutritional diseases such as scurvy or anemia. Second, reports on cases of rickets are often based on fairly subtle changes in the shape of the long bones (Bennike 1985: 210, 213; Grmek 1989: 76), which may not be specific for this condition. Third, cases of rickets that are associated with undernourishment are difficult to recognize because growth may have stopped (Stuart-Macadam 1989b: 41).
A remarkable case from a pre-Dynastic Nubian site illustrates the complexity of diagnosis in archaeological human remains. The case has been described by J. T. Rowling (1967: 277) and by D. J. Ortner and W. G. J. Putschar (1981: 284-7). The specimen exhibits bending of the long bones of the forearm, although the humeri are relatively unaffected. The long bones of the lower extremity also exhibit bending, and the pelvis is flattened in the anteroposterior axis. All these features support a diagnosis of osteomalacia, but the specimen is that of a male, so the problem cannot be associated with nutritional deficiencies that can occur during childbearing. An additional complicating feature is the extensive development of abnormal bone on both femora and in the interosseous areas of the radius/ulna and tibia/fibula. This is not typical of osteomalacia and probably represents a pathological complication in addition to vitamin D deficiency.
Cases of rickets have been reported at several archaeological sites in Europe for the Mesolithic period (Zivanovic 1975: 174; Nemeskéri and Lengyel 1978: 241; Grimm 1984; Meiklejohn and Zvelebil 1991), and the later Bronze Age (Schultz 1990: 178, 1993; Schultz and Schmidt-Schultz 1994). Reports of possible cases have also been recorded in the Middle East as early as the Mesolithic period (Macchiarelli 1989: 587). There may be additional cases in the Neolithic period (Röhrer-Ertl 1981, as cited in Smith, Bar-Yosef, and Sillen 1984: 121) and at two sites in Dynastic Egypt (Ortner and Putschar 1981: 285; Buikstra, Baker, and Cook 1993: 44-5). In South Asia, there have been reports of rickets from the Mesolithic, Chalcolithic, and Iron Age periods (Lovell and Kennedy 1989: 91). Osteomalacia has been reported for Mesolithic sites in Europe (Nemeskéri and Lengyel 1978: 241) and in the Middle East (Macchiarelli 1989: 587).
Vitamin C Deficiency
Vitamin C (ascorbic acid) deficiency causes scurvy, a condition that is seen in both children and adults. Because humans cannot store vitamin C in the body, regular intake is essential. As vitamin C is abundant in fresh fruits and vegetables and occurs in small quantities in uncooked meat, scurvy is unlikely to occur in societies where such foods are common in the diet year-round. Historically, vitamin C deficiency has been endemic in northern and temperate climates toward the end of winter (Maat 1986: 160). In adults, scurvy is expressed only after four or five months of total deprivation of vitamin C (Stuart-Macadam 1989b: 219-20).
Vitamin C is critical in the formation of connective tissue, including bone protein and the structural proteins of blood vessels. In bone, the lack of vitamin C may lead to diminished bone protein (osteoid) formation by osteoblasts. The failure to form osteoid results in the abnormal retention of calcified cartilage, which has less biomechanical strength than normal bone. Fractures, particularly at the growth plate, are a common feature. In blood vessel formation the vessel walls may be weak, particularly in young children. This defect may result in bleeding from even minimal trauma. Bleeding can elevate the periosteum and lead to the formation of abnormal subperiosteal bone. It can also stimulate an inflammatory response resulting in abnormal bone destruction or formation adjacent to the bleeding.
Reports of scurvy in archaeological human remains are not common for several reasons. First, evidence of scurvy is hard to detect. For example, if scurvy is manifested in the long bones of a population, the frequency will probably represent only half of the actual cases (Maat 1986: 159). Second, many of the anatomical features associated with scurvy are as yet poorly understood, as is illustrated by an unusual type and distribution pattern of lesions being studied by Ortner. The pattern occurs in both Old and New World specimens in a variety of stages of severity (e.g., Ortner 1984). Essentially, the lesions are inflammatory and exhibit an initial stage that tends to be destructive, with fine porous holes penetrating the outer table of the skull. In later stages the lesions are proliferative but tend to be porous and resemble lesions seen in the anemias. However, the major distinction from the anemias is that the diploë is not involved in the scorbutic lesions and the anatomical distribution in the skull tends to be limited to those areas that lie beneath the major muscles associated with chewing—the temporalis and masseter muscles.
An interesting Old World case of probable scurvy is from the cemetery for the medieval hospital of St. James and St. Mary Magdalene in Chichester, England. Throughout much of the medieval period the hospital was for lepers. As leprosy declined in prevalence toward the end of the period, patients with other ailments were admitted.
The specimen (Chichester burial 215) consists of the partial skeleton of a child about 6 years old, probably from the latter part of the medieval period. The only evidence of skeletal pathology occurs in the skull, where there are two types of lesion. The first type is one in which fine holes penetrate the compact bone with no more than minimal reactive bone formation. This condition is well demonstrated in bone surrounding the infraorbital foramen, which provides a passageway for the infraorbital nerve, artery, and vein. In the Chichester child there are fine holes penetrating the cortical bone on the margin of the foramen with minimal reactive bone formation. The lesion is indicative of chronic inflammation that could have been caused by blood passing through the walls of defective blood vessels.
Another area of porosity is apparent bilaterally on the greater wing of the sphenoid and adjacent bone tissue. This area of porosity underlies the temporalis muscle, which has an unusual vascular supply that is particularly vulnerable to mild trauma and bleeding from defective blood vessels.
The second type of lesion is characterized by porous, proliferative lesions and occurs in two areas. One of these areas is the orbital roof. At this site, bilateral lesions, which are superficial to the normal cortex, are apparent. The surfaces of the pathological bone tissue, particularly in the left orbit, seem to be filling in the porosity, suggesting that recovery from the pathological problem was in progress at the time of death.
The second area of abnormal bone tissue is the internal cortical surface of the skull, with a particular focus in the regions of the sagittal and transverse venous sinuses. Inflammation, perhaps due to chronic bleeding between the dura and the inner table because of trauma to weakened blood vessels, is one possible explanation for this second type of lesion, particularly in the context of lesions apparent in other areas of the skull.
The probable diagnosis for this case is scurvy, which is manifested as a bone reaction to chronic bleeding from defective blood vessels. This diagnosis is particularly likely in view of the anatomical location of the lesions, although there is no evidence of defective bone tissue in the growth plates of the long bones (Trümmerfeld zone) as one would expect in active scurvy. However, this may be the result of partial recovery from the disease as indicated by the remodeling in the abnormal bone tissue formed on the orbital roof.
The Chichester case provides probable evidence of scurvy in medieval England. C. A. Roberts (1987) has reported a case of possible scurvy from a late Iron Age or early Roman (100 B.C. to A.D. 43) site in Beckford, Worcestershire, England. She described an infant exhibiting porous proliferative orbital lesions and reactive periostitis of the long bones. Schultz (1990: 178) has discussed the presence of infantile scurvy in Bronze Age sites in Europe (2200 to 1900 B.C.) and in Anatolia (2500 to 2300 B.C.) (Schultz and Schmidt-Schultz 1994: 8). In South Asia pathological cases possibly attributable to infantile scurvy have been reported in Late Chalcolithic/Iron Age material (Lukacs and Walimbe 1984: 123).
Iron Deficiency
Iron deficiency is today a common nutritional problem in many parts of the world. Two-thirds of women and children in developing countries are iron deficient (Scrimshaw 1991: 46). However, physical evidence for this condition in antiquity remains elusive, and the detection of trends in space and time remains inconclusive.
There are two general types of anemia that affect the human skeleton. Genetic anemias, such as sickle cell anemia and thalassemia, are caused by defects in red blood cells. Acquired anemias may result from chronic bleeding (such as is caused by internal parasites), or from an infection that will lead to a state of anemia (Stuart-Macadam 1989a; Meiklejohn and Zvelebil 1991: 130), or from an iron-deficient diet. Deficient dietary iron can be the result of either inadequate intake of iron from dietary sources or failure to absorb iron during the digestion of food.
Iron is a critical element in hemoglobin and important in the transfer and storage of oxygen in the red blood cells. Defective formation of hemoglobin may result in an increased turnover of red blood cells; this greatly increases demand for blood-forming marrow. In infants and small children the space available for blood formation is barely adequate for the hematopoietic marrow needed for normal blood formation. Enlargement of hematopoietic marrow space can occur in any of the bones. In long bones, marrow may enlarge at the expense of cortical bone, creating greater marrow volume and thinner cortices. In the skull, anemia produces enlargement of the diploë, which may replace the outer table, creating very porous bone tissue known as porotic hyperostosis. Porotic hyperostosis is a descriptive term first used by J. L. Angel in his research on human remains in the eastern Mediterranean (1966), where it is a well-known condition in archaeological skeletal material. Porotic enlargement of the orbital roof is a specific form of porotic hyperostosis called cribra orbitalia. The presence of both these conditions has been used by paleopathologists to diagnose anemias in archaeological human remains.
Attributing porotic hyperostosis to anemia should be done with caution for several reasons. First, diseases other than anemia (i.e., scurvy, parasitic infection, and rickets) can cause porotic enlargement of the skull. There are differences in pathogenesis that cause somewhat different skeletal manifestations, but overlap in pathological anatomy is considerable. Second, as mentioned previously, some diseases such as scurvy may occur in addition to anemia. Because both diseases cause porotic, hypertrophic lesions of the skull, careful anatomical analysis is critical. Finally, attributing porotic hyperostosis to a specific anemia, such as iron-deficiency anemia, is problematic. On the basis of anatomical features alone, it is very difficult to distinguish the bone changes caused by lack of iron in the diet from bone changes caused by one of the genetic anemias. These cautionary notes are intended to highlight the need for care in interpreting published reports of anemia (and other diseases caused by malnutrition), particularly when a diagnosis of a specific anemia is offered.
Angel (1966, 1972, 1977) was one of the earliest observers to link porotic hyperostosis in archaeological human remains to genetic anemia (thalassemia). He argued that thalassemia was an adaptive mechanism in response to endemic malaria in the eastern Mediterranean. The abnormal hemoglobin of thalassemia, in inhibiting the reproduction of the malarial parasite, protects the individual from severe disease.
As indicated earlier, in malarial regions of the Old World, such as the eastern Mediterranean, it may be difficult to differentiate porotic hyperostosis caused by genetic anemia from dietary anemia. However, in nonmalarial areas of the Old World, such as northern Europe, this condition is more likely to be caused by nongenetic anemia such as iron-deficiency anemia.
Because determining the probable cause of anemia is so complex, few reports have been able to provide a link between porotic hyperostosis and diet. In prehistoric Nubian populations, poor diet may have been one of the factors that led to iron-deficiency anemia (Carlson et al. 1974, as cited in Stuart-Macadam 1989a: 219). At Bronze Age Toppo Daguzzo in Italy (Repetto, Canci, and Borgogni Tarli 1988: 176), the high rate of cribra orbitalia was possibly caused by nutritional stress connected with weaning. At Metaponto, a Greek colony (c. 600 to 250 B.C.) in southern Italy noted for its agricultural wealth, the presence of porotic hyperostosis, along with other skeletal stress markers, indicated to researchers that the colony had nutritional problems (Henneberg, Henneberg, and Carter 1992: 452). It has been suggested that specific nutrients may have been lacking in the diet.
Fluorosis
Fluorosis as a pathological condition occurs in geographical regions where excessive fluorine is found in the water supply. It may also occur in hot climates where spring or well water is only marginally high in fluoride, but people tend to drink large amounts of water, thereby increasing their intake of fluoride. In addition, high rates of evaporation may increase the concentration of fluoride in water that has been standing (Littleton and Frohlich 1993: 443). Fluorosis has also been known to occur where water that contains the mineral is used to irrigate crops or to prepare food, thereby increasing the amount ingested (Leverett 1982, as cited in Lukacs, Retief, and Jarrige 1985: 187).
In the Old World, fluorosis has been documented in ancient populations of Hungary (Molnar and Molnar 1985: 55), India (Lukacs et al. 1985: 187), and areas of the Arabian Gulf (Littleton and Frohlich 1993: 443).
Fluorosis is known primarily from abnormalities of the permanent teeth, although the skeleton may also be affected. If excessive fluorine is ingested during dental development, dentition will be affected in several ways, depending upon severity. J. Littleton and B. Frohlich (1989: 64) observed fluorosis in archaeological specimens from Middle Bronze Age and Islamic periods in Bahrain. They categorized their findings into four stages of severity: (1) normal or translucent enamel, (2) white opacities on the enamel, (3) minute pitting with brownish staining, and (4) finally, more severe and marked pitting with widespread brown to black staining of the tooth. They noted that about 50 percent of the individuals in both the Bronze Age and the Islamic periods showed dental fluorosis (1989: 68).
Other cases of dental fluorosis have been reported in the archaeological record. At a site in the Arabian Gulf on the island of Umm an Nar (c. 2500 B.C.), Littleton and Frohlich (1993: 443) found that 21 percent of the teeth excavated showed signs of fluorosis. In Hungary, S. Molnar and I. Molnar (1985) reported that in seven skeletal populations dated from late Neolithic to late Bronze Age (c. 3000 B.C. to c. 1200 B.C.), “mottled” or “chalky” teeth suggestive of fluorosis appeared. The frequencies varied from 30 to 67 percent of individuals (Molnar and Molnar 1985: 60). In South Asia, J. R. Lukacs, D. H. Retief, and J. F. Jarrige (1985: 187) found dental fluorosis at Early Neolithic (c. 7000 to 6000 B.C.) and Chalcolithic (c. 4000 to 3000 B.C.) levels at Mehrgarh.
In order for fluoride to affect the skeletal system, the condition must be long-standing and severe (Flemming Møller and Gudjonsson 1932; Sankaran and Gadekar 1964). Skeletal manifestations of fluorosis may involve ossification of ligament and tendon tissue at their origin and insertion. However, other types of connective tissue may also be ossified, such as the tissue at the costal margin of the ribs. Connective tissue within the neural canal is involved in some cases, reducing the space needed for the spinal cord and other neurological pathways. If severe, it can cause nerve damage and paralysis.
Fluorosis may also affect mineralization of osteoid during osteon remodeling in the microscopic structure of bone. In contrast with the ossification of ligament and tendon tissue, excessive fluorine inhibits mineralization of osteoid at the histological level of tissue organization. It is unclear why, in some situations, fluorosis stimulates abnormal mineralization, yet in other situations, it inhibits mineralization. In microradiographs, inhibited mineralization is seen as a zone of poor mineralization.
Examples of archaeological fluorosis of bone tissue are rare. However, in Bahrain, excavations from third to second millennium B.C. burial mounds have revealed the full range of this disease (Frohlich, Ortner, and Al-Khalifa 1987/88). In addition to dental problems, skeletons show ossification of ligaments and tendons, and some exhibit ossification of connective tissue within the neural canal. The most severe case is that of a 50-year-old male who had a fused spine in addition to large, bony projections at the ligament attachments of the radius, ulna, tibia, and fibula. Chemical analysis indicates almost 10 times the normal levels of fluorine in bone material.
Protein-Energy Malnutrition
Protein-energy malnutrition (PEM), or protein-calorie deficiency, covers a range of syndromes from malnutrition to starvation. The best-known clinical manifestations are seen in children in the form of kwashiorkor (a chronic form that is caused by lack of protein) and marasmus (an acute form where the child wastes away) (Newman 1993: 950). PEM occurs in areas of poverty, with its highest rates in parts of Asia and Africa (Newman 1993: 954).
PEM has no specific skeletal markers that enable us to identify it in skeletal remains. It affects the human skeleton in different ways, depending on severity and age of occurrence. During growth and development it may affect the size of the individual so that bones and teeth are smaller than normal for that population. There may be other manifestations of PEM during this growth period, such as diminished sexual dimorphism, decreased cortical bone thickness, premature osteoporosis (associated with starvation), enamel hypoplasias, and Harris lines. Because malnutrition decreases the immune response to infection, a high rate of infection may also indicate nutritional problems. Unfortunately, most of the indicators of growth problems in malnutrition occur in other disease syndromes as well; thus, careful analysis of subtle abnormalities in skeletal samples is needed. Chemical and histological analyses provide supporting evidence of abnormal features apparent anatomically.
PEM is probably as old as humankind (Newman 1993: 953). Written records in the Old World over the past 6,000 years have alluded to frequent famines. Beginning around 4000 B.C. and ending around 500 B.C., the Middle East and northeastern Africa, specifically the Nile and Tigris and Euphrates river valleys, were “extraordinarily famine prone” (Dirks 1993: 162). The skeletal evidence in archaeological remains is based on a number of skeletal abnormalities that, observers have concluded, are the result of nutritional problems.
Several studies suggesting problems with nutrition have been undertaken in northeastern Africa. In reviewing 25 years of work done on prehistoric Nubian skeletal material, G. J. Armelagos and J. O. Mills (1993: 10-11) noted that reduced long bone growth in children and premature bone loss in both children and young women were due to nutritional causes, specifically to impaired calcium metabolism. One of the complications of PEM in modern populations is thought to be interference with the metabolism of calcium (Newman 1993: 953). In Nubia, reliance on cereal grains such as barley, millet, and sorghum, which are poor sources of calcium and iron, may have been a major factor in the dietary deficiency of the population (Armelagos and Mills 1993: 11). In a later Meroitic site (c. 500 B.C. to A.D. 200) in the Sudan, E. Fulcheri and colleagues (1994: 51) found that 90 percent of the children’s skeletons (0 to 12 years old) showed signs of growth disturbances or nutritional deficiencies.
There are signs of malnutrition from other areas and time periods as well. In the Arabian Gulf, the Mesolithic necropolis (c. 3700 to 3200 B.C.) of Ra’s al Hamra revealed skeletal remains of a population under “strong environmental stress” with numerous pathologies, including rickets, porotic hyperostosis, and cribra orbitalia (Macchiarelli 1989). In addition, indications of growth disturbances in the form of a high rate of enamel hypoplasias and a low rate of sexual dimorphism have led to the conclusion that part of this stress was nutritional (Coppa, Cucina, and Mack 1993: 79). Specifically, the inhabitants may have suffered from protein-calorie malnutrition (Macchiarelli 1989: 587).
At Bronze Age (third millennium B.C.) Jelsovce in Slovakia, M. Schultz and T. H. Schmidt-Schultz (1994: 8) found “strong evidence of malnutrition” for the infant population, but noted that the relatively high frequency of enamel hypoplasia, anemia, rickets, and scurvy, in addition to infection, was not typical for the Bronze Age. Nutritional stress has also been suggested by the presence of premature osteoporosis among the pre-Hispanic inhabitants of the Canary Islands (Reimers et al. 1989; Martin and Mateos 1992) and among the population of Bronze Age Crete (McGeorge and Mavroudis 1987).
A review of the literature, combined with our own research and experience, leaves no doubt in our minds that humans have had nutritional problems extending at least back into the Mesolithic period. We have seen probable evidence of vitamin C deficiency, vitamin D deficiency, iron-deficiency anemia, fluorosis, and protein-energy malnutrition. However, because the conditions that cause malnutrition may be sporadic or even random, they vary in expression in both time and space. The prevalence of nutritional diseases may reflect variations in food availability, which can be affected by local or seasonal environmental conditions. For example, crop failure can result from various factors, such as a shortage of water or an overabundance of pests. Other nutritional problems can be caused by idiosyncratic circumstances such as individual food preferences or specific cultural customs.
Culture affects nutrition by influencing the foods that are hunted, gathered, herded, or cultivated, as well as the ways they are prepared for consumption. Cultural traditions and taboos frequently dictate food choices. All these variables affecting nutrition, combined with differences in observers and the varying methodologies they use in studying ancient human remains, make finding diachronic patterns or trends in human nutrition difficult.
Whether humankind benefited from or was harmed by the epochal changes in the quality and quantity of food over the past 10,000 years is, in our opinion, still open to debate. Many studies of skeletal remains conclude that the level of health, as indicated by nutrition, declined with the change from the Mesolithic hunter-gatherer way of life to the later period of developed agriculture. M. N. Cohen and G. J. Armelagos (1984a: 587), in summing up the results of a symposium on the paleopathology of the consequences of agriculture, noted that studies of both the Old and New Worlds provided consistent evidence that farming was accompanied by a decline in the quality of nutrition. Other, more recent studies have indicated agreement with this conclusion. A. Agelarakis and B. Waddell (1994: 9), working in southwestern Asia, stated that skeletal remains from infants and children showed an increase in dietary stress during the agricultural transition. Similarly, N. C. Lovell and K. A. R. Kennedy (1989: 91) observed that signs of nutritional stress increased with farming in South Asia.
By contrast, however, in a thorough review of well-studied skeletal material from Mesolithic and Neolithic Europe, C. Meiklejohn and M. Zvelebil (1991) found unexpected variability in the health status of populations connected with the Mesolithic-Neolithic transition. Part of this variability was related to diet, and they concluded that for Europe, no significant trends in health were visible in the skeletons of those populations that made the transition from hunting and gathering to greater dependence on agriculture, and from mobile to relatively sedentary communities. Although some differences between specific areas (i.e., the western Mediterranean and northern and eastern Europe) seem to exist, deficiencies in sample size mean that neither time- nor space-dependent patterns emerge from their review of the existing data. Clearly, different observers interpret evidence on the history of nutritional diseases in somewhat different ways. This is not surprising given the nature of the data. The questions about the relationship of malnutrition to changes in time and space remain an important scientific problem. Additional studies on skeletal material, particularly those that apply new biochemical and histological methods, offer the promise of a clearer understanding of these issues in the near future.