Milk and Dairy Products

Keith Vernon. Cambridge World History of Food. Editor: Kenneth F Kiple & Kriemhild Conee Ornelas. Volume 1. Cambridge, UK: Cambridge University Press, 2000.

Purity and Danger

Milk occupies a curiously ambiguous place in the history and culture of food. It has been pointed to as an archetypal, almost elementally nourishing food, supremely healthful, reflecting the nurturing relationship of mother and infant. In recent times, its whiteness has come to stand as a symbol of natural goodness and purity. But milk also conceals danger. Its nutritional largesse is equally appealing to hosts of putrefying bacteria, and unless milk is consumed almost immediately, it rapidly deteriorates into a decidedly unwholesome mass. Even in the apparently safe period between lactation and curdling, pathogenic organisms may lurk and multiply with potentially more devastating consequences for a new infant than the more immediately apparent problems of an obviously bad food.

The very processes of corruption, however, also provided the ways by which milk became a more widespread and acceptable food. Some contaminating organisms transform milk toward simple forms of butter, cheese, or yoghurt, and it is in these forms, not as a beverage, that milk has been consumed throughout the greater part of the history of human eating. As a highly ephemeral food then, unless milk is transmitted directly between provider (whether human or animal) and consumer, it is fraught with danger. Preservation has thus been the overriding factor in milk’s development as an important food for humans.

Initially, preservation was achieved through manufacture into butter or cheese. Later, briefly, fresh milk was kept safe only by cleanliness of production and speed of transport; in the twentieth century, however, milk has been preserved primarily by means of heat treatment, particularly pasteurization. This preservation of milk, particularly on an industrial scale since the late nineteenth century, highlights another contradictory tension in the nature of its consumption. Milk production is a quintessentially female process, and the resonances of the mothering bond imparted a crucially feminine nature to the whole area of dairying in preindustrial times. Even though milk could only become an important foodstuff by transforming the female milk into a different, harder manufactured solidity, commercial dairies and domestic output remained spheres of women’s activity. The femininity of dairy production, however, could not withstand industrial dairying methods, and from the end of the nineteenth century, men began to take over dairying as it became concentrated in bigger, more technically sophisticated industrial plants.

There is a further fundamental dichotomy in the culture and history of milk (one almost unique among foodstuffs) in the very ability of people to digest it. Some people do not have the enzyme—lactase—required to digest lactose, the principal sugar in milk, and the pain and discomfort arising from the inability to absorb milk sugars are reflected in cultural traditions that render even the thought of imbibing milk repulsive. Lactase deficiency affects most of the world’s peoples, many of whom regard milk drinking as revolting. Thus, milk consumption has been geographically limited, even though milk itself can be produced almost anywhere. It has been consumed in parts of Asia and Africa, but its consumption has been most significant in Europe and in areas of European colonization, such as Australasia and North America.

These tensions and contradictions in the very nature of milk—a pristine, white, and nutritious beverage, which also harbors and conceals corruption; a whole and complete food which, in principle, demands no preparation but which, in practice for most of its history, has required transformation into something else, and which vast numbers of people literally cannot stomach—are reflected in the history of milk production and consumption. In preindustrial societies, dairy produce had a part to play in peoples’ diets, although it was probably not particularly important in terms of overall nutrition. Yet, milk had a cultural resonance, beyond its dietary value, that saw it enshrined in several religious traditions as a signifier of promise and plenty, or as a symbolic link between spiritual and earthly nourishment. From the early eighteenth-century commercialization of agriculture, dairying took on greater economic importance, particularly in provisioning urban centers.

In the nineteenth century, increasing urbanization and the development of rapid transportation and communication systems, particularly railways, began for the first time to create a market for liquid milk on a large scale. Dairying and the production of liquid milk acquired nationalistic significance around the beginning of the twentieth century as a result of growing concern in Western countries about the health of infants. Yet although milk had come to be vitally important, the uncertain hygiene associated with it highlighted the necessity of clean production and supply. The original dichotomies regulating the consumption of milk remained, however, because the dissemination of liquid milk required enormous mechanical intervention, and its natural purity could be maintained only by industrial processing.

Throughout the twentieth century, as milk eventually became a readily available, clean, and safe commodity, its consumption as a beverage reached a cultural epitome: Its whiteness evoked an image of goodness—a signifier of health and hygiene. Its ascendancy was short-lived, however; in the overfed northern hemisphere, the same nutritional value that had made it so important in the previous decades consigned it to the ranks of dietary sins. At the same time, inappropriate developing-world feeding schemes (based on dried milk) and the recognition of lactase deficiency have undermined the notion of universal goodness and revealed the dangers of a nutritional imperialism.

Milk in Preindustrial Societies

To drink milk is an inherently natural impulse; an infant placed at the breast knows instinctively what to do. As suitable animals (such as mares, asses, ewes, goats, and various kinds of cows) were domesticated, it appears that drinking animal milk became an acceptable practice. Presumably, folk taxonomic associations between humans and other mammals would have indicated that milk was an animal secretion that could be consumed as food, and there are numerous records and legends of infants suckling directly from animals, as well as from a range of artificial devices (Fildes 1986).

As with most other early foods, the origins of dairy products are unclear. Certain natural fermentations give rise to yoghurt or soft, cheeselike substances that are sufficiently different from putrid milk to have encouraged their sampling. There are stories that cheese originated among Near Eastern nomads who may have stored milk in the stomach bags of cows, a common form of container, in which the natural rennet would have given rise to a sort of cheese. Similarly, one might imagine a horseman setting out with a bag of milk, only to find, when he came to drink it, that the agitation of riding had curdled it into a not unpleasant mixture of curds and buttermilk (Tannahill 1973). Whatever the origins, dairy products have had a place in human diets from very early times.

Equally early in human history, however, a fundamental split occurred in the history and culture of dairy consumption because, as already noted, drinking more than very small quantities of milk causes the majority of the world’s adult population to suffer digestive problems. It has been estimated that although more than 96 percent of northern European peoples are able to digest milk, some 50 to 75 percent of Africans, Indians, Near Eastern Asians, and eastern Europeans, and virtually all Asian and Native American peoples, cannot digest it. Their bodies stop producing lactase—the enzyme that breaks down the milk sugar, lactose—soon after weaning (Tannahill 1973). It has been suggested that an adult ability to break down lactose spread as people moved northward into colder climates. Covering themselves with more clothes, these people experienced shortages of vitamin D previously derived from the action of sunlight on the skin. The absorption of calcium, which normally depends on vitamin D, is enhanced by lactose itself, and milk is a particularly rich source of calcium. There would, thus, have been a selective advantage for those people who retained the capacity to digest milk, and over time, the proportion of the population in northern climates with the ability to digest lactose would have increased (Harris 1986).

Such a starkly biological account, however, does not explain the powerful rejection of milk among many Asian peoples and needs to be supplemented with cultural factors. One suggestion is that Chinese agricultural practices worked to exclude milk from the culture; because planting took place year-round, primarily with human labor in areas of high population density, there were few draft animals that could have provided milk. The main flesh animal was the pig, which was impossible to milk; thus, after weaning, Chinese toddlers were not exposed to milk from other sources. By contrast, in subcontinental Asia, draft animals were more prevalent—indeed, were necessary to prepare land for the shorter planting season dictated by a monsoon climate. Because of greater availability of milk, cultural aversion to it did not develop, and dairy products remained an important feature of the diet (Harris 1986). An alternative hypothesis, and one, apparently, traditionally held by Chinese people, is that the aversion to milk arose from the desire to distinguish themselves from the nomads on the northern borders of China, who drank fermented mare’s milk (Chang 1977). It does seem that milk products had become a feature of the diet of the northern Chinese aristocracy between the Han and Sung periods, but it appears that, after the ensuing period of Mongolian rule, milk definitely acquired a barbarian reputation (Chang 1977).

In most of the preindustrial world, however, milk (or, more usually, dairy products) had a place in the diet of all those who had domesticated animals and could absorb lactose. Among nomadic pastoralists of the Near East, sheep and goats provided the main source of milk and cheese. On the great open grasslands of central Asia, and extending into eastern Europe, mare’s milk, often fermented into the alcoholic liquor kumiss, was consumed in large quantities (Tannahill 1973). In subcontinental Asia, the buffalo was the principal source of milk to be made primarily into ghee—a reduced butter in which most of the moisture is removed by heating—for cooking or ceremonial purposes (Mahias 1988). Pastoralists of Africa, with vast herds of cattle, used a considerable amount of milk in their diets (Pyke 1968). Milk and cheese are also mentioned as foods, medicines, and beauty aids in records of the ancient civilizations of Egypt, Greece, and Rome (Warner 1976).

In Europe, dairy products were a notable part of the peasants’ diet wherever they were available. Across the continent, the great majority of people lived on the verge of starvation, particularly during the late winter months and the periodic harvest failures. The basic diet of cereal-based porridge or soup was rarely supplemented with actual meat, and the more prevalent dairy products (or “white meats,” as they were called) were welcome sources of animal fat and protein, whether as cheese, butter, beverage, or additions to soup (Mennell 1985). In the early months of the year, however, cheese or butter kept throughout the winter was mostly used up, and cattle not slaughtered the previous autumn struggled to survive until the new grass appeared (Smith and Christian 1984). Thus, though an important source of animal fat and proteins, dairy products (like meat) cannot have been much more than a flavoring for the basically cereal diet of the peasants.

Among the nobility in the late medieval period there was a growing disdain for dairy produce. With upper-class access to often vast quantities of meat, the “white meats” of the peasantry were increasingly scorned, probably especially by the aspiring merchants of the towns. Nonetheless, dairy products did not disappear from the tables of the well-to-do, and in the late sixteenth and early seventeenth centuries, butter became more important in noble larders, although used primarily for cooking and making the elaborate sauces of the new haute cuisine then developing in France and Italy (Mennell 1985).

Perhaps because milk was recognized as a particularly rich and nourishing food, as well as something of a luxury to have in more than small quantities, but also (and one suspects primarily) because of the symbolic importance of the nurturing bond between mother and child, milk has achieved some prominence in myths and religious systems. In the Old Testament, the promised land was one of “milk and honey.” The image of a mother goddess suckling her child is a common representation of Earth bringing forth sustenance. The Egyptian mother goddess, Isis, suckled her divine son Horus, while the Greek god Zeus was nurtured by Amalthea (variously depicted as a mortal woman or as a nanny goat). The symbolism of the mother with her infant god passed into Christian representations of the Madonna and Child, which perpetuated the divine linkages between the spiritual and earthly worlds, mediated by the physical nurturing of milk (Warner 1976). Milk and dairy products have a vital role in sacrificial and purifying rituals in Indian religious myths, especially as the life-giving force of the fire god Agni (Mahias 1988).

The place of dairy products in the diet of peoples in nonindustrial societies has remained virtually unchanged over time; they can be an important source of animal protein but remain a minor component of largely cereal or vegetable diets. In northern Europe, however, and particularly in England, the role of milk and dairy products began to change with the wider emergence of commercial agriculture, urbanization, and proto-industrialization, beginning about the end of the sixteenth century and continuing throughout the seventeenth century.

In seventeenth-century England, demand for dairy products increased with the emergence of rural industry and the concentration of a growing proportion of the population in towns (Drummond and Wilbraham 1939). Cheese was of growing importance in the diets of urban laborers, particularly in London. It was cheap, nutritious, and also convenient; it could be stored for reasonable lengths of time and could be carried to places of work and eaten easily there. Large quantities of cheese were also required by the navy and by new, large institutions, such as hospitals and workhouses (Fussell 1926-9; Fussell and Goodman 1934-7). This demand was largely met by the commercialized agriculture that had been developing on the larger estates carved out after the Reformation. The numbers of milch cows multiplied, and dairies became necessary and integral parts of English country houses. Dairying was practiced by anyone who had access to a cow, which replaced the sheep as the primary milk provider (Tannahill 1973), and, for the respectable poor in rural areas, could provide an important source of income. The cheese trade also relied on a reasonably efficient transportation system, and London was served by sea as well as by road (Fussell 1966).

Throughout the early phases of expanding commercialization, dairying remained a female domain characterized by an arcane knowledge (Valenze 1991). Although from southern Africa to northern Europe the notion persisted that a woman who handled milk during menstruation might curdle it, the mystery of dairying lay in the special competence of dairymaids (Fussell 1966; Pyke 1968). The dairy in a country house was invariably attached to the kitchen, supervised by the farmer’s wife (or the female head of the household in larger concerns), and the production of cheese and butter was a female operation. From the seventeenth century, books of household management included dairying as a routine aspect of the mistress’s responsibilities. Hygiene in the dairy was constantly stressed, with detailed instructions provided as to the proper construction and maintenance of the dairy and the duties of the women working in it (Fussell 1966).

By the end of the eighteenth century, wherever dairying continued to be carried out in a preindustrial, subsistence agricultural context, milk products retained their customary position as a minor adjunct to the diet, varying only by degrees among places where dairying was more or less pronounced. But in the industrializing and urbanizing world, large-scale commercial agriculture was beginning to alter the nature of production and consumption of dairy produce and bring about a crucial transformation in the historical culture of milk.

Milk in an Urbanizing World

The tripartite revolution of industrialization, urbanization, and the commercialization and increasing productivity of agriculture had dramatic consequences for food production and consumption. Enough food was produced to fuel enormous population growth, with increasing proportions of that population ceasing to be agriculturally productive. Urban dwellers, as net food consumers, depended on food being brought in from producing areas. Throughout the nineteenth century, this took place not only on a national but an international scale, until most parts of the world became integrated into a global network of food trade, principally geared toward feeding Europe. Thus, surplus dairy producers sent huge quantities of cheese and butter, increasingly manufactured in factories, to northern Europe.

At the same time, there was a growing demand for liquid milk in urban areas, and nearby dairy producers began to concentrate on its supply, partially to offset the competition in manufactured products. Yet, accompanying this expansion of production was an increasing consumer concern about the quality of the milk supply, particularly toward the end of the nineteenth century, when milk was implicated in questions of infant mortality. In the elaboration of a range of measures to control problems of milk adulteration and unhygienic procedures, dairying was completely transformed into a highly mechanized manufacturing and distribution activity, which steadily undermined its traditionally feminine nature. Such a pattern recurred throughout the nineteenth-century developing world—commercial, industrial dairying geared toward a manufactured dairy market, paralleled by the development of liquid milk production in urban areas, which, in turn, gave rise to concern about milk quality.

During the eighteenth century, northern European agriculture became much more productive, with novel crop rotation techniques, new machinery, better land management, and the more intensive methods achieved with enclosure. Dairying benefited from greater attention to fodder crops, which allowed cows to be fed more adequately throughout the winter. Yields still fell during winter months, but more cows survived, and yields quickly recovered with the spring grass (Fussell 1966). Thus, the potential arose for year-round dairying.

By the second half of the eighteenth century, increasing food production was supplying not only a steadily growing rural population but also, in Britain, an explosive increase in urban populations. During the first half of the nineteenth century, the sources of Britain’s wealth gradually shifted from agriculture and the landed estates toward manufacturing and industry. This was symbolized by the repeal in 1846 of the Corn Laws, which saw landed and agricultural interests supplanted by the demands of industry for free trade. Yet, as Britain ceased to be agriculturally self-sufficient, free traders had to cater not only to the requirements of industry but also to the need for imported foodstuffs to feed urban industrial workers (Burnett 1989).

Across the globe, producers of agricultural surplus geared up to meet this need (Offer 1989). Australia and New Zealand sent butter and meat to British markets; Ireland and Denmark developed dairying to meet British demand. Danish farmers, recognizing the likely requirements of their rapidly urbanizing neighbor across the North Sea, organized in cooperatives to develop highly efficient methods for producing butter and bacon. In the process, Denmark’s national economy became modernized and export driven, yet still based on agriculture (Murray 1977; Keillor 1993).

Ireland had been characterized by very high dairy production and consumption patterns from medieval times, although by the eighteenth century, the continued reliance on dairy foods may be seen as a mark of poverty. As was the case in Denmark, nineteenth-century Irish dairying became more commercialized to supply Ireland’s urbanized neighbor, though at the expense of an important source of animal food for its own consumption (O’Grada 1977; Cullen 1992).

In the United States, between the end of the eighteenth century and the middle of the nineteenth, westward expansion had brought new, high-yielding agricultural lands into production. By midcentury, cheese was being shipped down the Erie Canal from upstate New York. But across the vast open spaces of North America, rail transport was the key to the development of dairying. Also crucial was the hand cream separator (Cochrane 1979), which allowed even small farmers to cream off the butterfat that railroads could then take to regional factories to be made into cheese and butter. Industrial methods for manufacturing these products spread across the northeastern and north central states (Lampard 1963; Cochrane 1979). By the second half of the nineteenth century, production and transportation were such that cheese manufactured in the United States or Canada, and butter made in New Zealand, could reach British markets at lower prices than those required by British farmers (Burnett 1989). In addition, an outbreak of rinderpest in England during the 1860s wiped out most of the urban cows, which helped North American dairy products gain a prominent place in British markets (Fussell 1966).

As in the United States, railways in Great Britain were fundamental in enhancing the importance of liquid milk as a beverage. From the 1840s, liquid milk could be brought from the countryside to the towns and sold before it became sour. The perishability of milk had always restricted its scope as a drink. But speed, together with the development of coolers and refrigerated railway cars, increased its viability, and in all areas within reach of an urban center, dairying was increasingly concerned with liquid milk supply. By the second half of the nineteenth century, milk was being carried to London from as far away as Derbyshire, and Lancashire and Cheshire emerged as major dairy counties to supply the conurbations of Liverpool, Manchester, and Stoke (Whetham 1964; Taylor 1974, 1976, 1987). Effectively, the whole of Britain could now be regarded as an urban area, which had an enormous demand for milk.

In North America, the manufacturing of dairy products was concentrated in the lake states of Wisconsin, Minnesota, and Illinois, whereas liquid milk to supply major urban areas was produced in the northeastern and Atlantic seaboard states (Lampard 1963; Cochrane 1979). Thus, better communications and large-scale manufacturing prompted a functional division in dairying. Although manufactured goods, such as cheese, could be carried long distances to urban markets more cheaply than those of a small, local producer, dairying in urban areas enjoyed a protected market for liquid milk that was not susceptible to more distant or foreign competition.

In England, toward the end of the nineteenth century, liquid milk came to be seen almost as an agricultural panacea, and not just for hard-pressed dairy farmers in urban areas. During the great depression of English farming from the mid-1870s to the 1890s, when the full economic impact of imported food began to be felt, farmers increasingly turned to the naturally protected production of liquid milk. More acres reverted to pasture, which was condemned by commentators bemoaning the apparent decline of cereal-based farming, but liquid milk production provided farmers with a welcome respite from foreign competition and helped alleviate the agricultural slump, furnishing a regular cash income throughout the year, helping to clear debts, and bringing some semblance of profitability (Taylor 1974, 1987).

Although few other nations relied so heavily on imports as Britain, by the end of the century, farmers in many countries were feeling the effects of competition in the international food market. In the United States, grain farmers of the mid-nineteenth century found themselves threatened by even higher-yielding lands opening up in the west and sought to diversify, notably into dairying. Delegates were sent from the north central states to study Danish methods, and many Danes forged new careers in the United States, ultimately to the consternation of the Danish government, which feared rivalry for the British market (Keillor 1993). State agricultural colleges and the new experiment stations also sought to improve upon the (popularly perceived) low standards of American dairying. Scientific “book farming,” popular with progressives, found an ideal outlet in dairying, with rational and efficient farming methods based on calculated feeding plans, milk recording schemes, and analyses of butterfat content (Johnson 1971; Peagram 1991). Dairying was becoming self-consciously modern.

Some of the grain states, where there had been an overreliance on single crops, saw dairying as a useful means of diversifying. The North Dakota agricultural experiment station, for example, tried in the early twentieth century to offset the state’s reliance on spring wheat by promoting dairying. Although for a while North Dakota did become a notable butter producer, it seems that cereal farmers did not adjust happily to the rather different demands of dairy farming (Danbom 1989).

Ultimately, the general pattern of dairying common to urbanizing countries was established in the United States. Liquid milk was the mainstay of farmers with access to urban markets, and the manufacture of other dairy products was concentrated in the north central states (Michigan, Minnesota, and Wisconsin) on the principal rail routes to the East (Haystead and File 1955; Lampard 1963; Cochrane 1979).

The growing industrialization of dairying and the manufacture of dairy products steadily eroded the femininity of dairying. Butter production, especially, had remained an essentially female activity throughout most of the nineteenth century, occupying an important role not only in the domestic economy but also in the wider social relations and status of women in farming communities. Household manufacture of milk products, however, was increasingly replaced by the transportation of milk to railways and factories and by industrial production (all male spheres of activity), and a marked decline occurred in the number of women involved in dairying. In a typically paradoxical process, the more natural, feminine state of liquid milk gained greater prominence, but only through the intervention of mechanical artifacts operated by men. As dairying left the household, an important element of rural female employment, skill, and authority went with it (Cohen 1984; Osterud 1988; Nunnally 1989; Bourke 1990; Valenze 1991).

Also embattled were milk consumers, increasingly concentrated in urban centers, subject to the vagaries of transport systems for their food supply, and suffering the appalling conditions of massive urban expansion. Many people living in towns simply did not have enough to eat; what was available was of indifferent to abominable quality and, frequently, heavily adulterated. Milk and dairy products were part of the diet, but their supply was highly uncertain. The principal sources were urban milk shops, with cowsheds that provoked bitter condemnation from health reformers. Such places were notorious for squalid conditions, with the cows fed on slops and refuse (frequently the spent grain from distilleries), and disease rife among them (Okun 1986; Burnett 1989). Moreover, milk, whether from urban shops or from roundsmen coming in from the country, was routinely adulterated in the mid-nineteenth century (Atkins 1992). Arthur Hill Hassall, investigating for the Lancet in the early 1850s, found that milk was diluted with anywhere from 10 to 50 percent water, that water often drawn from polluted wells and springs (Burnett 1989).

Throughout the second half of the nineteenth century, urban diets in Britain improved noticeably, both in quantity and quality. This was primarily a result of the rise in real wages as general economic prosperity began to percolate down the social strata, but it was also because of an increasing availability of cheap imported food. Food had always been the principal item of expenditure for urban workers, and so any extra real income was invariably spent on food first, both to extend the variety of the diet and to obtain greater quantities. Thus, from the 1890s, more meat, eggs, dairy produce, and fresh vegetables appeared on the tables of the urban working classes (Oddy and Miller 1976, 1985).

The quality of food was also addressed in a more effective fashion. Following decades of revelations by such individuals as Frederick Accum about the extent of food adulteration, a series of food purity laws was introduced in Britain. These were initially rather ineffective, but the 1875 Sale of Food and Drugs Act, and the Public Health Act of the same year, began to bring about real improvements in the basic quality of foods (Burnett 1989). This story was a familiar one in most major urban nations during the second half of the nineteenth century. In the United States, for example, reformers (also stimulated by Accum) launched investigations and turned up their own evidence of food adulteration. Effective legislation was introduced by cities and states by the end of the century, culminating in the 1906 Pure Food and Drug Act (Okun 1986).

Urban populations probably benefited from the volumes of liquid milk brought in by the railways, but fresh milk still became less than fresh very quickly, and although gross adulterants were being eliminated, the problem of the keeping properties of milk remained. Dairy-producing countries had sought means of preserving milk throughout the nineteenth century. During the 1850s, developments in condensing techniques, especially by the American Gail Borden, brought about the formation, in 1865, of the Anglo-Swiss Condensed Milk Company, which soon had factories across Europe. Tinned milk consumption rose rapidly after 1870, and in the early twentieth century, several methods for making dried milk encouraged the emergence of a powdered-milk industry. In New Zealand, a dairy exporter developed the new Just-Hatmaker process of drying milk and this, in turn, gave birth to the giant corporation Glaxo (Davenport-Hines and Slinn 1992). In England, the pharmaceutical company Allen and Hanbury’s used a method of oven-drying evaporated milk (Tweedale 1990). Condensed and powdered milk were both popular, as they kept longer than fresh milk; moreover, tinned milk could be diluted more or less heavily according to the vicissitudes of the family economy.

By the beginning of the twentieth century, adult diets had improved (although women remained less well fed, well into the new century), and the worst excesses of urban squalor, poverty, and deprivation were being addressed. These improvements in diet, sanitation, and housing were reflected in a general fall in mortality rates (McKeown 1969), but infant mortality rates remained stubbornly high. The problem was particularly noticeable in France (which also had a declining birth rate) after 1870 and in Britain during the 1890s. Yet concern for high infant mortality was also expressed in Canada, the United States, Australia, New Zealand, and the Netherlands. The underlying issue was the same: With an increasingly tense international situation brought on by the formation of rival power blocs in Europe and the imperial maneuverings of the United States, greater significance was accorded to the health of a nation’s people. Populations began to be seen as national assets, and both their size and quality were viewed as matters of national political, economic, and military importance. Attention was focused on the health of urban populations and, particularly, the health of infants who would be the soldiers, workers, and mothers of the future.

Beginning in France in the 1890s, governments, charities, and local authorities campaigned to promote breast feeding, primarily, but also to provide subsidized milk to mothers with new children (Fildes, Marks, and Marland 1992). There were also increasing demands to improve further the hygiene of the dairy industry and milk supply. Bacteriologists had been studying the flora of milk to investigate its keeping properties and the transmission of infectious disease from animals to humans, with special attention to tuberculosis (TB) and the question of whether bovine TB could be passed to infants in milk. In Britain, the problem was thoroughly examined in a series of royal commissions on TB that effectively created a semipermanent body of bacteriologists investigating questions of milk and meat hygiene. In 1914, the Milk and Dairies Act was passed to prevent the sale of milk from tuberculous cows, while giving local authorities considerable powers of inspection (Bryder 1988; Smith 1988; Atkins 1992).

Similar measures were pursued in many other countries. In Australia, milk institutes were established in Brisbane and Melbourne in 1908 to ensure clean milk supplies (Mein Smith 1992). The same year saw the Canadian Medical Association devise a system of standards for milk hygiene in the dairy industry, which was implemented by local authorities. Toronto and Hamilton established pure milk depots before World War I (Comacchio 1992). Free or subsidized milk was available in the Netherlands from Gouttes de lait—modeled closely on the pioneering French milk depots (Marland 1992). In American cities, the emphasis was more on ensuring the quality of milk supplies than on providing subsidized milk, and schemes for regulating standards were devised by many cities, including New York, Philadelphia, and Memphis, and the state of Illinois, with particular emphasis on eradicating TB and promoting the pasteurization of milk (Helper 1986; Shoemaker 1986; Meckel 1990; Peagram 1991).

Such attention given to infant welfare was incorporated into the wider schemes of social welfare developing in the early twentieth century (Lewis 1993). Enthusiasm for the provision of free milk seems to have been relatively short-lived and was replaced with an increasing emphasis on the education of mothers on matters of child care and good housekeeping. But the public-health principles underlying campaigns for clean milk persisted, and the ever-increasing volumes of liquid milk coming into the towns were carefully scrutinized.

Milk in the Twentieth Century

The twentieth century was marked by a massive expansion in the production and consumption of dairy commodities, particularly liquid milk. Rationalized, scientific dairy farming, based on calculated feeding, careful milk recording, and artificial insemination, has produced cows that are virtually milk machines. The production, manufacturing, and distribution of dairy products has become ever more concentrated, yet the continual growth of dairying has required equally prodigious efforts to find markets for the produce.

Around midcentury, milk was viewed as one of humankind’s most important foods. Advertisers and scientists alike had convinced the public that milk was a supremely healthy, nourishing, even life-giving substance. Its almost mystical whiteness was, for perhaps the only time in its history, matched by its hygienic purity and popular appeal. Toward the end of the twentieth century, however, health problems connected with milk drinking were uncovered, and inappropriate Western marketing schemes involving milk were exposed as highly detrimental to infants’ health in developing countries.

In the early twentieth century, however, dairying appeared to have a glorious future and, with ever-expanding urban populations, more and more farmers turned to liquid milk production. For particularly hard-pressed British farmers, dairying had become the cornerstone of agriculture, outstripping the economic return from cereals (Taylor 1974). In the United States, too, dairying was a fallback for farmers on relatively low-yielding land and a common means of diversifying a narrowly based state agriculture (Cochrane 1979; Danbom 1989). Such a recourse to dairying, however, carried with it the danger of overproduction. Several strategies were pursued in an effort to deal with dairy surpluses, but by far the favored means was to promote demand. This had expanded on its own in prosperous times as increasing numbers of people with surplus income were able to extend the scope of their diets to include more fresh meat, vegetables, and dairy produce. Beginning in the 1920s and 1930s, a concerted effort was made to encourage consumption of dairy goods and, especially in Britain, the drinking of milk. Such an effort, however, required that the commodity be safe.

Experiments with various types of heat treatment had been conducted to extend the keeping properties of milk (Dwork 1987a). The principle had been established by Louis Pasteur in his studies of beer and wine, and bacteriologists and inventors were not slow to follow up on it. Well into the twentieth century, however, there was significant resistance to the heat treatment of milk (Pyke 1968). It was commonly believed that heating destroyed most of milk’s essentially nutritious and health-giving properties, even a vitalist life force (McKee, personal communication). At the same time, there was a school of opinion within the dairy industry that a certain level of natural contamination was necessary to make milk into cheese or butter, and that pasteurized milk was unsuitable for dairy manufacture (Davis 1983). Although the arguments for pasteurization were accepted more readily in the United States than in Britain, debate continued into the 1930s (Meckel 1990). Opponents argued that heat treatment was a technical fix for sloppy procedures and an excuse to produce unclean milk (Davis 1983). Ultimately, however, an increasingly concentrated dairy industry cut through the controversy by routinely pasteurizing all milk it received, which was becoming a necessity because of the large volumes handled and the long distances now covered by milk distributors (Whetham 1976).

A considerable amount of research on nutrition and health was done during the interwar years, particularly in depressed areas of Britain. Experiments by Corry Mann and Boyd Orr on school children showed that groups given milk for a certain period put on more weight, had higher hemoglobin blood counts, and, according to their teachers, demonstrated a greater attentiveness at school; their parents commented that they were also livelier at home (Burnett 1989). In the controversial debates about diet in depressed Britain, milk was an important factor. It was thought that although welfare could not provide a sufficient quantity of food to maintain good health, this goal might be achieved with food supplements, such as milk, needed in fairly small quantities (Webster 1982).

By the 1930s, a culture of dairy consumption was well established in the major dairy manufacturing countries. On the Continent, this primarily involved manufactured butter and, to a lesser extent, cheese; in the United States and in Scandinavia, where liquid milk consumption had a high profile, there was a sound base upon which marketing campaigns could expand demand (Teuteberg 1992). In Britain, although dairying was a major sector of agriculture (and one in which liquid milk was particularly prevalent), people neither ate as much butter as continental Europeans nor drank as much milk as Americans and Scandinavians. Milk was chiefly something to put in tea or give to children, old people, or invalids; indeed, throughout Europe, liquid milk was associated with sickliness or effeminacy (McKee, personal communication).

Thus, one task of the Milk Marketing Boards for England, Scotland, and Northern Ireland, set up after the act of 1933, was to overcome prejudices against milk and stimulate demand for the products of an industry now suffering the consequences of overproduction, reflected in the decreasing prices that manufacturers were paying for milk. The Milk Marketing Boards (MMBs), as producer organizations, tried to regulate the dairy market like benign monopolists. Farmers were paid a fixed price for their milk, which was in turn sold to dairy product manufacturers at one price and to liquid milk distributors at a higher one, thus evening out the differences in income that depended on farmers’ access (or nonaccess) to a liquid market. As the MMBs undertook to buy all the milk produced by dairy farmers, they had to find markets for it. From the start, a tariff was levied on each gallon to pay for advertising, and a massive campaign was launched, following American models. It stressed the value of milk for health, strength, and beauty, and featured sportsmen and women, manual laborers, and film stars (Jenkins 1970; Whetstone 1970; Baker 1973).

The emphasis on naturalness, whiteness, and purity reflected the contemporary concern for light, space, and healthy living. Milk was marketed to youth through futuristic, American-style milk bars—all chrome, steel, and modernity. The marketing of ice cream was another area in which the British copied from America and the Continent in trying to improve sales (McKee, personal communication). In 1934, drawing on recent discoveries of the vitamin content of milk, the company Cadbury launched a new advertising campaign for its dairy milk chocolate, which was marketed as a food as well as a treat (Othick 1976; Horrocks 1993).

Throughout the 1930s, the virtually unique retail system of doorstep deliveries was established in Britain. Derived from the roundsmen of the eighteenth century who pushed churn-carrying carts through the streets, milk distributors between the wars developed the service of daily deliveries of pint bottles of pasteurized milk to the doorstep (Baker 1973). The MMBs promoted the system and, until recently, the image of the milkman doing his rounds and the bottle on the doorstep were central to the culture of milk in Britain, elevating the milkman to the status of folk hero or comic character. Only in the 1980s was there a notable decline in the importance of milkmen, as the greater prevalence of automobiles, refrigerators, and supermarkets resulted in British consumers joining Americans and other Europeans in buying cartons of milk from stores (Jenkins 1970).

Milk remained important for children during the 1930s, with the Milk Act of 1934 providing subsidized milk for British schoolchildren. The act also continued the practice of baby clinics begun at the turn of the century, which encouraged the habit of drinking milk at an early age (Hurt 1985). Milk for pregnant women and new mothers was also a priority, and the welfare state distributed nourishment for mothers, as well as infants and children, until the program was discontinued in 1972 by the British parliament. The efforts of the MMBs were significant—so much so that as the 1930s came to a close, British people were drinking, on average, a pint of milk more per week than they had in the mid-1920s. Moreover, such a trend continued through the 1950s, when British per capita milk consumption exceeded that of the United States; only Sweden and Ireland had higher levels (Jenkins 1970).

In the last decades of the twentieth century, dairying and dairy products continued to hold an ambiguous status. The market for milk in the major dairy areas of the developed world seemed to be saturated and was even showing signs of diminishing. Consumption peaked sometime during the 1960s and then, depending on place, stabilized or began to shrink (OECD 1976). Nonetheless, dairying has continued to be a significant factor in industrial agriculture throughout the last half of the twentieth century. In Europe, the world’s principal dairy-producing region, milk products accounted for between 11 and 35 percent of total farm sales (Butterwick and Rolfe 1968). Such regular cash income is a lifeline for farmers, but the vast sums paid out in subsidies have helped to swell the milk “lakes” and butter “mountains” of surplus production to the absurd point that butter has been used as a feed for the cows that produced the milk to begin with. Since the interwar period, dairy farming has been maintained only by subsidies and tariffs, with overproduction the unhappy outcome (Johnson 1973; Cannon 1987). Developing countries seem to be repeating the experience of the developed West, with liquid milk sectors emerging to supply urban areas (Kurien 1970; Chang 1977; Mahias 1988).

Although dairy farming has been sustained at considerable economic cost, other, more insidious problems have emerged for milk products. The high food value that made milk so important for infants and children until the mid-twentieth century led to more or less serious health problems in the overfed late twentieth century. The high fat content of milk has been implicated in the modern Western illnesses of obesity, coronary artery disease, and a host of digestive and allergic conditions. As a consequence, there has been a marked trend toward skimmed milk and milk powders, and even toward the exclusion of milk from the diet. Similarly, butter has been a victim of concerns about heart disease and body weight and has been steadily replaced by increasingly palatable margarine during the twentieth century (Cannon 1987).

To combat health worries, marketers of milk and milk products have focused on their energy-giving properties and high calcium content. Although milk could not realistically be promoted as a slimming aid, as it had been in the 1930s, the industry was still aiming for a pint per person per day, and advertised milk as a provider of instant vitality (Jenkins 1970). Some dairy marketers in recent years have cut the ground from under critics by deliberately proclaiming the richness and luxury of specialized products, such as fresh cream or ice cream. In an ironic reversal of healthy eating advice, cream has been lauded as “naughty—but nice!” and recent advertisements have been aimed at subverting any associations of ice cream with notions of childhood or purity.

A rather more sinister turn of events, however, has also tarnished the pure-white reputation of milk. In the early 1970s, the issue of inappropriate marketing of canned milk and milk powders among developing-world peoples began to receive public attention. In Yemen, for example, a study showed that dried milk and canned milk, provided as part of general food aid, were being fed to babies. This occurred in a country where Western medicine and high-technology goods had high social cachet and where the natural capacities of women were commonly denigrated. The result of the aid was that women neglected to breast-feed their babies and turned instead to proprietary baby foods or artificial milk. Many of these women were unable to read the instructions and had no access to the necessary sterilizing equipment to keep bottles clean, let alone to an adequate supply of fresh, clean water, with the result being an increase in infant mortality. Although resolutions have been made by dairy manufacturers in accordance with World Health Organization (WHO) and United Nations International Children’s Emergency Fund (UNICEF) recommendations not to promote goods that might be used as substitutes for breast milk, contraventions continue (Melrose 1981).

Milk remains a singularly, almost uniquely, nutritious foodstuff, invested with elementally significant cultural and nutritional value. Yet milk has a Janus-faced nature—as it nourishes, so can it harm. Its whiteness evokes purity, but may also conceal corruption. Thus, as a commodity made widely available, milk has sometimes attacked the health it was intended to support and killed infants it was meant to nurture.