Mark Nathan Cohen. Encyclopedia of Food and Culture. Editor: Solomon H Katz. Volume 1. New York: Charles Scribner’s Sons, 2003.
The last thirty years have seen a revolution in our understanding of the origins of agriculture. What was once seen as a pattern of unilateral human exploitation of domesticated crops and animals has now been described as a pattern of coevolution and mutual domestication between human beings and their various domesticates. What was once seen as a technological breakthrough, a new concept, or “invention” (the so-called Neolithic revolution) is now commonly viewed as the adoption of techniques and ultimately an economy long known to foragers in which “invention” played little or no role. Since many domesticates are plants that in the wild naturally accumulate around human habitation and garbage, and thrive in disturbed habitats, it seems very likely that the awareness of their growth patterns and the concepts of planting and tending would have been clear to any observant forager; thus, the techniques were not “new.” They simply awaited use, not discovery. In fact, the concept of domestication may have been practiced first on nonfood crops such as the bottle gourd or other crops chosen for their utility long before the domestication of food plants and the ultimate adoption of food economies based on domesticates (farming).
The question then becomes not how domestication was “invented” but why it was adopted. What was once assumed to depend on cultural diffusion of ideas and/or crops is now seen by most scholars as processes of independent local adoption of various crops.
Patterns of Domestication
The domestication of the various crops was geographically a very widespread series of parallel events. Some scholars now recognize from seven to twelve independent or “pristine” centers in which agriculture was undertaken prior to the diffusion of other crops or crop complexes (although many of these are disputed), scattered throughout Southwest, South, Southeast, and East Asia; North Africa and New Guinea; Central and South America; and possibly North America. As the earliest dates for the first appearance of cultigens are pushed back; as individual “centers” of domestication are found to contain more than one “hearth” where cultivation of different crops first occurred; as different strains of a crop, for example, maize or rice, are found to have been domesticated independently in two or more regions; as an increasing range of crops are studied; and as little-known local domestic crops are identified in various regions in periods before major crops were disseminated, the number of possible independent or “pristine” centers of domestication is increasing, and the increase seems likely to continue.
Early Domestication of Crops
Combining patterns provided by various scholars (see the bibliography) suggests that major domesticates appear in Southwest Asia or the Near East (wheat, barley, lentils) by 9,000-12,000 B.P. or even earlier; in Thailand (rice) between 12,000 and 8,000 B.P.; in China (millet, soybeans, rice) ca. 9,500 B.P.; in Mesoamerica (squash, beans, and maize) between 10,000 and 5,500 B.P.; in South America (lima beans and peppers) by ca. 8,000-10,000 B.P. and, with less certainty, potatoes and manioc by 6,000 B.P.; in North America north of Mexico (sunflowers, maygrass, chenopods, sumpweed, and marsh elder) by 4,000-5,000 B.P.; in North Africa (pearl millet, sorghum) by 5,500-6,800 B.P.; and in Southeast Asia (taro) by 8,000 B.P. and possibly much earlier. (Root crops are presumed to have had even longer histories of domestication in the moist tropics, but they are poorly preserved and difficult to document archaeologically.)
As an example of the regional complexity of incipient domestication, there may have been three centers of domestication at three altitudes in South America: a lowland complex involving manioc and sweet potato; a midelevation complex involving amaranth, peanut, jicama, and coca; and a high-elevation group including potato and other lesser tubers such as ullucu.
The agriculture of particular preferred crops also spread widely by diffusion or population movement in some areas in the prehistoric period. In perhaps the best known patterns of diffusion of agricultural economies (or displacement of indigenous hunter-gatherer populations), Middle Eastern farming economies had spread to Bulgaria by 7500 B.P.; to Italy by 7000 B.P.; and to Britain by 6000-5000 B.P. Maize diffused very widely in North and South America from Mesoamerica (apparently without the significant spread of people); and rice cultivation diffused throughout South, East, and Southeast Asia.
Despite its geographical dispersal, the adoption of the various domestic crop economies occurred within a narrow time span, between about 10,000 and 3,000 B.P. The human population entered a relationship with many different plants at about the same time, implying that human activities were the prime motivator of major economic change and entry into mutual domestication in each instance.
Domestication (genetic manipulation of plants) and the adoption of agricultural economies (primary dependence on domesticates as food), once seen as an “event,” are now viewed as distinct from one another, each a long process in its own right. There is often a substantial time lag between incipient domestication of a crop and actual dependence on it. That is, the adoption of farming was a gradual quantitative process more than a revolutionary rapid adoption—a pattern of gradually increasing interaction, and degrees of domestication and economic interdependence.
Moreover, the adoption of agriculture was, by all accounts, the coalescence of a long, gradual series of distinctive and often independent behaviors. Techniques used by hunter-gatherers to increase food supplies, long before farming, included the use of fire to stimulate new growth; the protection of favorite plants; sowing seeds or parts of tubers without domestication; preparing soils; eliminating competitors; fertilizing; irrigating; concentration of plants; controlling of growth cycles; expansion of ranges; and ultimately domestication. By this definition, domestication means altering plants genetically to live in proximity to human settlements, enlarging desired parts, breeding out toxins, unpleasant tastes, and physical barriers to exploitation—in short, getting plants to respond to human rather than natural selection.
Dependence on Crops
Almost all authorities describe a gradual increase in the quantitative dependence on domesticated crops. Most also see a quantitative shift from high-quality to low-quality resources (a reduction in the variety and density of essential nutrients, calories, protein, vitamins, minerals, and fatty acids per unit of bulk, desirability of foods, and ease of exploitation). Most also describe a movement downward in the trophic levels of foods exploited. A common theme in almost all discussions of the origins of agriculture is the idea of increasingly “intensive” exploitation of foods (the use of increased labor to exploit smaller and smaller areas of land).
This sequence of events commonly first involved a focus on an increasing range (a “broad spectrum”) of low-priority wild resources, increasing the efficiency with which space was utilized—a shift from economies focused on comparatively scarce but otherwise valuable large animals and high-quality vegetable resources to one in which new resources or different emphases included smaller game, greater reliance on fish and shellfish, and a focus on low-quality starchy seeds. There is a clear and widespread appearance of and increase in apparatus (grindstones for processing small seeds, fishing equipment, small projectile points) in most parts of the world before the adoption of agriculture, which cannot be a function of differential preservation.
Ultimately the spectrum of exploitation seems to have narrowed again as populations shifted toward more complete modification of landscapes to permit increased dependence on particular low-priority but calorically productive starches that could be obtained in large quantities per unit of space and then stored. Such modification of the land to focus on the quantity of calories per unit of space by promoting staple crops would then eliminate some calorically marginal foods, resulting in a loss of dietary variety and of some nutrients. (The major staples—rice, maize, wheat, barley, potatoes, sweet potatoes, manioc, and taro—all cause dietary deficiencies when relied on too heavily as the sole basis of a diet. The deficiencies are likely to be exacerbated by dry storage, which destroys C and B-complex vitamins.)
Intensification of Resource Use
The intensification can probably be seen best through the eyes of optimal foraging theory, much of which has focused on caloric returns for each unit of labor provided by various foods, and has argued that human groups will go first for high-ranking resources (those that yield high returns for a unit of work including preparation). Repeated studies of comparative efficiency of food-gathering techniques in various parts of the world have routinely reported that human populations should prefer resources such as large game, which, when available, can be exploited with great efficiency. Populations turn to increasing reliance on lower-ranking, that is, less efficiently exploited, resources (small game, shellfish, most nuts, individually caught fish, and small seeds) only as those of higher rank disappear or become so scarce that the time involved in finding them becomes prohibitively high (for example, as large game becomes scarce). Such calculations by scientists do not predict the behavior of individual populations perfectly (presumably because other factors such as local food preferences or inertia come into play). But they do dramatically conform to the broad trends in prehistory relating to a Paleolithic-Mesolithic-Neolithic sequence or its equivalents in the New World. And they suggest that this sequence of economic changes is one of declining efficiency in resource exploitation.
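The ranking logic of this prey-choice (diet-breadth) reasoning can be sketched numerically. The figures below—encounter rates, caloric yields, and handling times—are entirely hypothetical and are chosen only to illustrate the principle: resources enter the optimal diet in order of post-encounter return rate (calories per hour of handling), and a lower-ranked resource is added only while its return rate exceeds the overall return rate of the diet already chosen.

```python
# Diet-breadth (prey-choice) model sketch. All numbers are hypothetical,
# for illustration only: large game is scarce but efficient to exploit;
# small seeds are abundant but costly to handle and process.

def optimal_diet(resources):
    """resources: list of (name, encounters_per_hour, kcal, handling_hours)."""
    # Rank by post-encounter return rate e/h, highest first.
    ranked = sorted(resources, key=lambda r: r[2] / r[3], reverse=True)
    diet, rate = [], 0.0
    for name, lam, e, h in ranked:
        if e / h <= rate:   # lower-ranked item is not worth handling
            break
        diet.append(name)
        # Overall rate for the current diet: expected energy per total hour
        # (one hour of search plus handling time for expected encounters).
        num = sum(l * en for n, l, en, hh in ranked if n in diet)
        den = 1.0 + sum(l * hh for n, l, en, hh in ranked if n in diet)
        rate = num / den
    return diet, rate

resources = [
    ("large game",  0.05, 50000, 4.0),   # e/h = 12,500 kcal/hr
    ("small game",  0.5,   3000, 0.5),   # e/h = 6,000
    ("shellfish",   2.0,    500, 0.2),   # e/h = 2,500
    ("small seeds", 5.0,    300, 0.5),   # e/h = 600
]
diet, rate = optimal_diet(resources)
print(diet, round(rate))      # only the two top-ranked resources are taken

# Scarcity of the top-ranked resource broadens the optimal diet:
resources[0] = ("large game", 0.005, 50000, 4.0)
diet2, rate2 = optimal_diet(resources)
print(diet2, round(rate2))    # shellfish now enter the diet
```

When large game becomes ten times scarcer, the diet broadens to include shellfish and the overall return rate falls—paralleling both the “broad-spectrum” expansion and the declining efficiency described in the text.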
Resources used increasingly in the intensification of individual units of land—the so-called broad-spectrum revolution—typically provided fewer calories per unit of labor than the comparative emphasis on large animal exploitation that preceded them. Small seeds such as wheat, barley, rice, and maize typically are among the least productive resources (and among the lowest priority as well in taste and quality) and would presumably have come into use only as preferred resources disappeared or became prohibitively scarce. Seasonal seeds, although potentially harvested quickly and in large quantity, typically involved intensive processing and the labor of storage as well as significant storage losses. A significant point is that the adoption of low-ranking resources depended not on their availability but on the declining availability of higher-ranked resources. Cereals were adopted not because they were or had become available but because preferred resources such as large game were becoming less available.
The major cereals are relatively inefficient to exploit and process, as demonstrated by Kenneth Russell with specific reference to the Middle Eastern cradle of wheat and barley cultivation. Agriculture, therefore, may not have been “invented” so much as adopted and dropped repeatedly as a consequence of the availability or scarcity of higher-ranked resources. This pattern may in fact be visible among Natufian, or Mesolithic, populations in the Middle East whose patterns of exploitation sometimes appear to defy any attempt to recognize, naively, a simple sequence of the type described above.
Technological changes were motivated by necessity or by demand, not by independent invention or technological advance. In a trend toward declining efficiency, one does not adopt new technologies simply because they are available. Such innovations may well be held in reserve until changing conditions render them the best remaining alternatives. Demand-side economics seems to have powered most of economic history; Malthusian or supply-side economics, with supply independent from demand, became the predominant pattern only when the rise of social classes prevented the needs of the poor from generating any economic “demand”—which implies not only need but entitlement (the ability to command resources).
Various sources point out, however, that such “intensification” occurred in parallel among incipient farmers and populations such as those of the West Coast of the United States, which developed an intense focus on storing starchy staples (such as acorns) but never domesticated them. The two activities may be distinguished, and centers of origin of domestication may be defined, less by human knowledge or intent than by the flexibility or recalcitrance of the intensively harvested plants toward incipient domestication. Some resources such as wheat respond readily to human manipulation; others such as acorns/oak trees defy it.
Increased demand results from population growth, climate change, and socially induced demand. Mark Cohen argues that population growth and increasing population density—or a combination of population growth and declining availability of preferred foods, which result in “population pressure” or an imbalance between population, resources, and prevalent food choices and extractive strategies—may be the main trigger of relevant economic changes. Such increasing density is ultimately traceable to the Pleistocene with the gradual density-dependent closing of cultural systems as increased density permitted groups to switch from exogamy to endogamy. According to this model, the widespread parallelism of different regions is based on the power of population flux (movement between groups) to equalize population pressure from region to region. The model has been criticized for, among other reasons, relying too much on flux as an explanatory necessity, for having the wrong time scale, and for underplaying the role of climate change.
A second category emphasizes the role of post-Pleistocene climate change in both facilitating and demanding exploitation of plants amenable to domestication. It has been argued, in fact, that farming would have been essentially impossible during the Pleistocene, but almost mandatory, at least in a competitive sense, in the Holocene. This model may provide a more powerful explanation of the regional parallelism of intensification in time than a purely population growth/flux model. The climate-based model has been criticized, however, as ignoring the fact that climate and environmental changes are zonal and therefore could not, of themselves, produce parallel economic changes in different environments undergoing different kinds of change.
A third major category that explains increased demand suggests that it resulted from enhanced social and political demand preceding and accompanying intensification. The problem is that such explanations, unless combined with data on population growth or climate change, fail to explain the parallel emergence of complex social forms.
Agriculture and the Decline in Health, Nutrition, and Food Security
Agriculture commonly has been associated with a number of social features: reduced territories, more marked social boundaries, and further closing of mating systems; greater territoriality and more formal definitions of property; complex social and political organization; food storage; and sedentism. Moreover, agriculture has until recently been considered the cause or enabler of these altered social institutions. These features are only loosely bound, may be separated by long spans of time, and may occur in any of various sequences. For example, sedentism in many regions occurs long before domestication (as in parts of the Middle East), but in the New World the reverse often occurs—domesticates appearing long before settled reliance on those domesticates. Social complexity may commonly follow the origins of agriculture but precedes it in many parts of the world and, as mentioned above, occurs without domestication in some parts of the world.
Changes in Health
What was once interpreted by researchers as a transition toward improving human health, nutrition, reliability of the food supply, greater ease of food procurement, and greater longevity is now viewed as the start of declining health, nutrition, and efficiency of labor, probably declining longevity, and perhaps even declining security of food supplies. It is now commonly accepted that the adoption of farming economies and sedentism resulted in declining health and nutrition. The conclusion is based on triangulation from three sources: contemporary observation of hunting and gathering versus farming societies; theoretical patterns of nutrients and parasites in nature; and paleopathology, the analysis of health and nutrition in prehistoric skeletons representing different periods of prehistory. Many sources have found parallel trends toward declining health in prehistoric populations but challenges to quantitative methods, interpretations of some evidence, and some specific conclusions in paleopathology have been offered. Observed paleopathological trends commonly accord with expectations from other lines of evidence.
It seems probable from epidemiological considerations—and it is clear from paleopathology—for example, that farming, large concentrations of population and sedentism, the accumulation of human feces, and the attraction of stored foods to potentially disease-bearing animals markedly increased parasite loads on human populations. The increase in the prevalence of visible periostitis, osteomyelitis, treponemal infection, and tuberculosis in skeletal populations conforms both to ethnographic observations and to models of probable disease history. The reduction of wild animal meat in the diet with the increasing focus on vegetable foods may initially have reduced the likelihood of food-borne diseases (of which animals are the major source). But the domestication of animals, their crowding, and their continuing proximity to human populations are likely to have raised meat-borne infections to new highs and seem responsible for epidemic diseases in human populations, many of which began as zoonotic (animal-borne) diseases shared by people and domestic animals.
Consequences of Agriculture
Sedentism and farming resulted in declining quality of nutrition (or at least in the decline in the quality of nutrients available to the human populations). Indeed, some researchers have extolled the virtue of hunter-gatherer diets. Agriculture is likely to have resulted in a marked downturn in food diversity and food quality, and ultimately in a decline in nutrition. An increase in cumulative dietary neurotoxins may also have occurred as farming was adopted, despite the fact that domestication itself may have bred toxic substances out of foods.
Agriculture also seems to have resulted in a change in the texture of foods toward softer foods, resulting in a decline in tooth wear but an increase in dental caries and a reduction in jaws and jaw strength. A significant advantage of soft foods based on boiling in ceramic pots, a practice largely restricted to sedentary populations, may have been the increasing potential for early weaning of children and improved food for toothless elders. But early weaning to cereals as opposed to a diet of mother’s milk is well known to have serious negative effects on childhood nutrition, infection, and survival.
A dramatic increase in iron deficiency anemia (porotic hyperostosis and cribra orbitalia) is associated everywhere in the archaeological record with sedentism, infection, and new crops. The trend is also predictable in nature, and may be observed in contemporary populations. The increased anemia probably resulted primarily from a large increase in iron-robbing hookworm associated with sedentism and from the sequestering by the body of its own iron as protection against bacterial disease.
The declining health that came with the advent of farming is also reflected in (but not universally) childhood declines in stature, osteoporosis in children, decreases in tooth size (as a result of declining maternal nutrition), and tooth defects.
Whether the adoption of broad-spectrum foraging, agriculture, storage, and sedentism increased or decreased the reliability of food supplies (and whether sedentism is itself a consequence of choice permitted by new resources or necessitated by them) is a matter of some debate. For example, it is not clear whether broad-spectrum foraging increased reliability by expanding the resource base, or decreased reliability by focusing exploitation on what had once been emergency resources.
Domestication, sedentism, and storage appear to have evened out potential seasonal shortages in resources, but they may also have reduced the reliability of the food supply by decreasing the variety of foods consumed; by preventing groups from moving in response to shortages; by creating new vulnerability in plants selected for human rather than natural needs; by moving resources beyond the natural habitats to which they are adapted for survival; and by increasing post-harvest food loss through storage—not only because stored resources are vulnerable to rot or theft by animals, but also because stores are subject to expropriation by human enemies. One possible biological clue to the resolution of this problem is that signs of episodic stress (enamel hypoplasia and microdefects in teeth in skeletal populations) generally become more common after agriculture was adopted.
Sedentary agriculture seems likely to have increased human fertility through a variety of mechanisms, including the shifting work loads for women; calorically richer diets; sedentism; and the increased marginal utility of children or the increased availability of weaning foods. Some researchers estimate that during the Mesolithic-Neolithic transition in the Iberian Peninsula fertility may have increased as much as from four to six live births per mother, which would imply very rapid acceleration of population growth. If, in fact, fertility on average increased (possibly significantly) but population growth on average accelerated only by the trivial amount calculated below, then life expectancy must on average have declined (since growth rates are a balance of both fertility and mortality). (There is little evidence from paleopathology that the adoption of sedentary farming increased on average human life expectancy and little reason to expect that it did.)
For whatever reasons, essentially all estimates of average post-domestication population growth suggest an increase in rates of population growth (calculated as compound interest rates). But on average, the increase can have been no more than from about 0.003 percent per year for pre-Neolithic hunter-gatherers to about 0.1 percent for Neolithic and post-Neolithic farmers. (In both cases the averages are simple mathematical calculations of what is possible, based on all reasonable estimates of world population at the period of the adoption of agriculture (about 5-25 million) and of estimated population in 1500 C.E. (about five hundred million).) Average population growth even after the onset of agriculture would therefore have been trivial to the point where it would have been almost imperceptible to the populations involved. It would have taken such populations about one thousand years to double in size. Growth and dispersal of agricultural populations and/or diffusion of domestic crops were hardly likely to have been exuberant in most locations for that reason, particularly if arguments about declining health and very low average growth rates are considered. Owing to their low rank as resources, crops would presumably have diffused only to populations facing similar levels of demand or pressure but lacking good local domesticates of their own.
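The arithmetic behind these average rates can be sketched directly from the figures in the text. The sketch below assumes, for illustration only, an adoption date of roughly 10,000 B.P. (about 8050 B.C.E., since B.P. is conventionally counted back from 1950); with starting populations of 5 to 25 million growing to about 500 million by 1500 C.E., it yields compound rates of roughly 0.03 to 0.05 percent per year, inside the range the text cites.

```python
import math

# Compound annual growth rate implied by two population estimates:
#   r = ln(P_end / P_start) / years,   doubling time = ln(2) / r
def annual_growth_rate(p_start, p_end, years):
    return math.log(p_end / p_start) / years

# Assumed span (an illustrative assumption): ~10,000 B.P., i.e. ~8050
# B.C.E., to 1500 C.E. -- about 9,550 years.
years = 8050 + 1500

for p_start in (5e6, 25e6):            # 5-25 million at adoption (from text)
    r = annual_growth_rate(p_start, 500e6, years)  # ~500 million by 1500 C.E.
    print(f"start {p_start / 1e6:.0f}M: "
          f"{100 * r:.3f}%/yr, doubling ~ {math.log(2) / r:.0f} years")
```

Even the higher of these rates implies a doubling time on the order of a thousand years or more, consistent with the text's point that such growth would have been almost imperceptible to the populations involved.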
On the other hand, population growth might have been comparatively quite rapid in some areas because of increased fertility and improved life expectancy. Exuberant growth in some areas, such as Europe, must have been balanced by the decline of other populations, including those of other farmers. Exuberant growth, or diffusion, perhaps based on the relative quality of some cereals such as wheat and barley among otherwise low-ranking, intensively exploited resources, is observable in some areas (such as the expansion of the Middle Eastern farming complex and the probable expansion of agricultural populations into Europe). But even there, in contrast to old models assuming population expansion of hunter-gatherer “bands” into areas of very low population density, expansion would, based on observed intensity of exploitation, have been into areas occupied by hunter-gatherers who would by this time have had population densities and social complexity almost equal to those of the farmers. The preexisting size and structure of groups of hunter-gatherers in areas of agricultural spread suggests that diffusion may have played a bigger role in the process than was once assumed.
Since health and nutrition seem to have declined, the primary advantage to farmers seems to have been both political and military because of the ability to concentrate population and raise larger armies. This would have conferred a considerable advantage in power at a time when few if any weapons were available that were capable of offsetting numerical superiority.