Thomas W Wilson & Clarence E Grim. Cambridge World History of Food. Editor: Kenneth F Kiple & Kriemhild Conee Ornelas. Volume 1. Cambridge, UK: Cambridge University Press, 2000.
Historically, dietary salt (sodium chloride) has been obtained by numerous methods, including solar evaporation of seawater, the boiling down of water from brine springs, and the mining of “rock” salt (Brisay and Evans 1975). In fact, R. P. Multhauf (1978) has pointed out that “salt-making” in history could be regarded as a quasi-agricultural occupation, as seen in frequent references to the annual production as a “harvest.” Such an occupation was seasonal, beginning with the advent of warm weather or the spring high tide and ceasing with the onset of autumnal rains. Multhauf has argued further that the quest for salt led to the development of major trade routes in the ancient world. The historian Herodotus, for example, described caravans heading for the salt oases of Libya, and great caravan routes also stretched across the Sahara, as salt from the desert was an important commodity exchanged for West African gold and slaves. Similarly huge salt deposits were mined in northern India before the time of Alexander the Great, and in the pre-Columbian Americas, the Maya and Aztecs traded salt that was employed in food, in medicines, and as an accessory in religious rituals. In China, evidence of salt mining dates from as early as 2000 B.C.
Homer termed salt “divine,” and Plato referred to it as “a substance dear to the gods.” Aristotle wrote that many regarded a brine or salt spring as a gift from the gods. In the Bible (Num. 18:19), it is written: “This is a perpetual covenant of salt before the Lord with you and your descendants also.” In the Orient, salt was regarded as the symbol of a bond between parties eating together. In Iran, “unfaithful to salt” referred to ungrateful or disloyal individuals. The English word “salary” is derived from salarium, the Latin word for salt, which was the pay of Roman soldiers. Moreover, Roman sausages were called salsus because so much salt was used to make them (Abrams 1983).
The preservative properties of salt have maintained the essentiality of the mineral throughout history. It helped meat last over long journeys, including those of marching armies and the migrations of peoples. Salt’s power to preserve meat led to the so-called invention of salted herring in fourteenth-century Europe, which has been called “a new era in the history of European salt production” (Multhauf 1978: 9). The technique of pickling preserves food by extracting water from animal tissues, making the dehydrated meat or fish resistant to bacterial attack (Bloch 1976).
During the eighteenth century, other industrial uses began to be found for salt. The invention in 1792 of a way to make sodium carbonate from salt launched the artificial soda industry, and by 1850, 15 percent of the salt of France was going into soda. Since that time, nondietary uses of salt have far outweighed its employment for culinary purposes (Multhauf 1978).
Historically, governments appreciated the importance of salt and have taxed it since ancient times (Multhauf 1978). During the nineteenth century in the United States, a salt tax helped build the Erie Canal, and during the twentieth century in India, Mahatma Gandhi revolted against a salt tax, leading to the famous “March to the Sea.” Such has been the importance of salt that one historian has written: “Clearly, anyone who can control the salt supply of a community has powers of life and death. The control of water, being more ubiquitous than salt, is not so simple to put into effect” (Bloch 1976: 337).
Salt and Sodium: Essential to Life
Salt in the Body
In 1684, Robert Boyle became the first to demonstrate scientifically that the “salty taste” in blood, sweat, and tears was actually caused by the presence of salt. After he removed the organic matter from whole blood by ignition, a fixed salt remained, which he found to be virtually identical to marine salt. About a century later (1776), H. M. Rouelle showed that a large proportion of the inorganic materials in blood serum could be isolated in the form of cubic crystals of “sea salt.” Later still, in the nineteenth century, J. J. Berzelius and A. J. G. Marcet revealed that sodium chloride was the principal inorganic constituent of other body fluids (both those that occasionally collected in the abdominal cavity, around the lungs or heart, or in a cyst or a blister and those that permanently surrounded the brain and spinal cord) and was present in much the same concentration as in blood serum (Kaufman 1980). In the same century (1807), Sir Humphry Davy discovered both sodium and potassium by passing an electrical current through moist caustic potash or caustic soda. More recently, biomedical researchers have defined sodium as the principal cation of the circulating blood and tissue fluids of animals (Denton 1982).
Sodium is the sixth most common element on earth. Sodium chloride (what we commonly call salt) is the chemical combination of ions of sodium (Na+, atomic weight 23) and chlorine (Cl-, atomic weight 35.5)—the latter element, in its pure form, is a deadly greenish-yellow gas that reacts with water to form hydrochloric acid. Roughly 40 percent of the weight of common salt is made up of sodium; the remainder is chloride. Pure sodium is never found in nature. When freed from common table salt by electrolysis, sodium is a soft metal, lighter than water, and so reactive with the oxygen in air that it must be specially stored out of contact with air. Sodium also reacts violently with water, as the two together form sodium hydroxide, in the process liberating hydrogen gas, which, in turn, bursts into flame from the heat of the reaction.
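The composition figure just given can be checked directly from the atomic weights quoted in the text; the following throwaway calculation (mine, not the authors') does so:

```python
# Mass fractions of sodium and chloride in table salt, using the
# atomic weights quoted above (Na = 23, Cl = 35.5).
NA, CL = 23.0, 35.5

na_fraction = NA / (NA + CL)  # ~0.393, i.e. roughly 40 percent sodium
cl_fraction = CL / (NA + CL)  # ~0.607, the chloride remainder
```

The exact figure is about 39.3 percent sodium by weight, which the text rounds to 40 percent.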
Nonetheless, even though a reactive element, sodium is essential to animal and human life. Indeed, life could be defined as the sum of the chemical processes that take place in the solution of salts between and within cells. In humans, the nutrients required to fuel life processes are first chewed and mixed with salt solutions produced by the salivary glands, then dissolved in salt and enzyme solutions from the stomach and pancreas, absorbed as salt solutions from the intestines, and delivered to the cells dissolved in a salt solution that ultimately depends on the ingestion of critical amounts of sodium and water. Excreted body fluids—blood, sweat, and tears—and feces are made up of these salts, and sodium salts are their key ingredient.
The Physical Need for Salt
In the nineteenth century, G. Bunge made the observation that carnivores never sought out salt, but herbivores did. His observation seemed to fit with common knowledge—hunters as well as husbandmen knew that herbivores came to salt licks—but Bunge suggested something new: that salt was a necessity for life. In his travels he observed in numerous places that carnivores never ate salt but herbivores seemed to have a vital need for a supplement of it. He noted that herbivores excreted 3 to 4 times as much potassium as carnivores and theorized that the much higher potassium content of the vegetarian diet displaced sodium from body salts, causing an increase in the amount excreted in the urine. He therefore reasoned that continuous consumption of a purely vegetarian diet with large amounts of potassium would make a large intake of sodium necessary for the maintenance of sodium balance (Bunge 1902).
Decades later, anthropologist Alfred Kroeber took issue with this notion of a biologically driven hunger for salt. He observed the Native Americans living along the Pacific coast of the United States and noted that salt was consumed in the south but not in the north. He saw no relationship among such factors as dietary salt use, the relative prevalence of seafood or meat in the diets, and the various climatic conditions, writing: “It must be concluded that whatever underlying urge there may be in physiology as influenced by diet and climate, the specific determinant of salt use or nonuse in most instances is social custom, in other words, culture” (Kroeber 1942: 2). H. Kaunitz found similar situations in areas of Australia, South Africa, and South America and suggested that salt craving might arise from “emotional” rather than innate needs (Kaunitz 1956).
J. Schulkin (1991) has recently noted that among psychologists in the 1930s, it was widely believed that “learning”—not biologically driven physical need—was responsible for the ingestion of minerals. However, C. P. Richter (following American physiologist Walter Cannon’s theory of The Wisdom of the Body) took a minority view that “learning” might not be the primary driver, and in 1936, he provided the first experimental evidence for a salt appetite. He removed the adrenal glands from experimental rats, thus depriving them of the sodium-retaining hormone aldosterone—a situation that would prove fatal in the absence of dietary sodium—and the amount of sodium ingested by the adrenalectomized rats increased dramatically. In 1939, he hypothesized that the drive for sodium was innate, and in 1941, he “discovered” that hormonal signals generate sodium hunger. Moreover, in 1956, he showed that during reproduction periods, the ingestion of salt by female rats rose markedly (Schulkin 1991).
One of the key experimental laboratories in the study of sodium metabolism has been the Howard Florey Institute in Melbourne, Australia, where investigators, under the leadership of Derek Denton (1982), have conducted many experiments. In one of the most notable, researchers trained sheep to press a lever to get salt water to drink. The sheep were then depleted of salt by saliva drainage. When finally given access to the lever for salt water, the sheep within 30 minutes consumed the precise amount of sodium that they had lost. Their “wisdom of the body” was such that, even if the sheep were given salt solutions of varying concentrations, they still consumed the amount required to replace the deficit.
The work of Denton and others strongly supported the view that there is an innate hunger for salt and that the brain controls this behavior (Schulkin 1991). But although there are a number of minerals that are “essential” nutrients, only sodium seems to command a “built-in” hunger; there is no “innate” craving for magnesium or potassium, to choose two examples. On the other hand, it is likely that, in the past, the hunger for sodium abetted the intake of other essential minerals, which would usually have been found in the same “salt-licks” as sodium. Sodium hunger does not require “learning,” although significant “learning” does interact with innate mechanisms to help guide sodium-hunger behavior (Schulkin 1991). The following is a summary of Schulkin’s ideas of the steps involved in the innate sodium-hunger pathway:
- An animal is “sodium-hungry.” (This would result from either a reduction in salt intake or excessive excretion of sodium from nonrenal sources, such as intestines or sweat glands.)
- A “representation” of salty taste is activated in the brain.
- The representation serves to guide the animal’s behavior in its search for salt—including its location, identification, and ingestion of the mineral.
- Innate mechanisms are responsible for the sodium-hungry animal (a) ingesting the salt immediately upon its first exposure (no “learning” is required for this), and (b) noting the significance of salt when not sodium-hungry.
- Thus, in terms of (b), there is a hedonic shift in the perception of salt that emerges in the salt-hungry animal.
- The result is a motivated behavior, with appetitive (physiologic need) and consummatory (behavioral want) phases, in search of salt.
Sodium is vital in maintaining the pressure and volume of the blood and the extracellular fluid. A major purpose of the blood is to carry extracellular fluids, bringing nutrients to the cells and removing metabolic products from them. As blood flows through the capillaries, water—containing nutrients—passes from the capillaries into the extracellular spaces to bathe the cells with nutrients and pick up cellular metabolic products (mostly waste), which are then swept by water movement back into the veins and carried to the kidney, liver, and lungs for metabolism or excretion. Sodium is also important in the transmission of nerve impulses, helps to metabolize carbohydrates and proteins, and has independent interactions with other ions—such as those of potassium, calcium, and chloride—to maintain the “sea within” us. But most importantly, from a medical viewpoint, sodium is a vital factor in the regulation of blood pressure.
Sodium is measured in units of moles or grams. For nutritional purposes, grams are used, usually milligrams (1 gram [g] = 1,000 milligrams [mg]); for clinical purposes (to measure concentration), millimoles per liter are used (1 mole = 1,000 millimoles or mmols). One mole of an element equals its atomic weight expressed in grams. As the atomic weight of sodium is 23, 1 mole of sodium is equal to 23 grams of sodium (23,000 mg), and 2,300 mg of sodium is the same as 100 mmols of sodium.
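The gram-to-millimole conversion just described reduces to a single division or multiplication by the atomic weight; a minimal helper pair (the function names are mine, for illustration only) makes the arithmetic explicit:

```python
# Converting between nutritional (mg) and clinical (mmol) units of
# sodium, using its atomic weight of 23 (grams per mole, hence
# milligrams per millimole).
NA_ATOMIC_WEIGHT = 23.0

def na_mg_to_mmol(mg):
    """Convert a mass of sodium in milligrams to millimoles."""
    return mg / NA_ATOMIC_WEIGHT

def na_mmol_to_mg(mmol):
    """Convert millimoles of sodium to milligrams."""
    return mmol * NA_ATOMIC_WEIGHT

# As in the text: 2,300 mg of sodium is the same as 100 mmol.
```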
Until the late 1940s, the measurement of sodium in both biological fluids and diets was a mostly laborious process that required the skills of a quantitative analytical chemist using 13 different steps—including, among others, “ashing,” extracting, evaporating, precipitating, washing, and weighing sodium yields on a microbalance—to determine the quantity in a single sample (Butler and Tuthill 1931). But in 1945, a revolution in the analytic accuracy and speed of sodium measurement was begun with the first report of the use of the flame photometric method. In 1945, this technique still required precipitation of plasma before analysis, but by 1947, only dilution of plasma was required (Overman and Davis 1947), and by 1949, instruments were available that could provide very accurate results within 5 minutes using either plasma or urine. By 1953, these devices were in wide use (Barnes et al. 1945; Mosher et al. 1949; Wallace et al. 1951).
The body has built-in “set points” designed to maintain sodium in homeostasis. When it takes in less salt than is lost in the urine, sweat, and stool, the concentration of sodium in the blood falls. When the blood sodium falls below an inherited “set point” (about 140 mmol per liter of serum), an area of the brain that is bathed by blood senses the decreased sodium concentration and activates hormonal defenses to maintain a constant concentration of the mineral. If the concentration of sodium continues to diminish, the kidneys will adjust by accelerating the excretion of water, so that the blood’s sodium concentration is maintained at the vital level. If the sodium supply is not replenished, there is a gradual desiccation of the body and, finally, death. In other words, a lack of sufficient sodium causes the organism literally to die of thirst.
By contrast, if blood sodium increases above the set-point level, a secretion of antidiuretic hormone (ADH) is released by the pituitary gland, and thirst mechanisms are activated to find and ingest water until the sodium concentration is reduced. At the same time, ADH causes the kidneys to excrete less water in an attempt to keep the body’s sodium at the correct concentration. If, however, the water supply is not replenished, more sodium will be excreted, and eventually, these water losses will lead to death.
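The set-point feedback described in the two paragraphs above can be caricatured in a few lines of code. This is a purely illustrative toy, not a physiological model: the volumes, step size, and tolerance are invented, and only the roughly 140 mmol/L serum set point is taken from the text.

```python
# Toy sketch of the sodium "set point" feedback: when concentration is
# above the set point, the body "drinks"/retains water (volume rises);
# when below, it excretes water (volume falls).
SET_POINT = 140.0  # mmol of sodium per liter of serum (from the text)

def regulate(total_na_mmol, volume_l, step_l=0.05, max_iters=1000):
    """Adjust water volume until concentration returns near the set point."""
    for _ in range(max_iters):
        conc = total_na_mmol / volume_l
        if abs(conc - SET_POINT) < 0.5:
            break
        volume_l += step_l if conc > SET_POINT else -step_l
    return volume_l

# Illustrative salt load without water: 700 mmol of sodium in 4.6 L is
# ~152 mmol/L, so the model retains water until the volume nears 5 L
# (700 / 140 = 5.0), restoring the set-point concentration.
new_volume = regulate(700.0, 4.6)
```

The real system, of course, uses ADH, thirst, and renal water handling in concert rather than a single feedback loop, but the direction of each correction is as sketched.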
The overriding mechanism that regulates total body sodium (and blood pressure) has been termed the “renal-fluid volume mechanism for pressure control” by A. C. Guyton and colleagues (1995). An analysis of the factors controlling blood pressure has shown that it can only be raised by one of two mechanisms: increasing the intake of dietary salt or limiting the kidney’s ability to excrete sodium.
Sodium and Human Evolution
The body’s need for sodium may also have played a role in genetic variability within the human species. During the 1980s, theories were proposed that suggested such a role in two diseases related to salt metabolism: cystic fibrosis and hypertension.
Cystic fibrosis (CF) is a recessive genetic condition related to sodium metabolism, in which, it was hypothesized, the carrier state had been protective of fluid and electrolyte loss during epidemics of diarrhea in human history. CF carriers, notably children before the age of reproduction, were thought to have protective mechanisms that diminished the loss of water during episodes of infectious diarrhea. Thus, individuals who were genetically enabled to control water and salt losses were more likely to survive to reproductive age. Indeed, the heterozygote carrier has been shown to have less sodium loss in feces than the homozygote noncarrier (Gabrial et al. 1994).
A more controversial evolutionary hypothesis is that one form of hypertension (high blood pressure)—“salt-sensitive” hypertension, which has a high frequency among African-Americans—may result, in part, from genetic adaptation to the African environment and its diseases of the past. More specifically, it has been suggested that—both during the transatlantic slave trade and during the period of slavery itself—individuals able to conserve sodium would have been more likely to survive the dehydrating diseases aboard ship, as well as the debilitation of hard physical labor. If so, then this past experience might be partially responsible for today’s prevalence of “salt-sensitive” high blood pressure among black people in the Western Hemisphere (Wilson and Grim 1991; Curtin 1992; Grim and Wilson 1993).
When humans go without salt in the diet, or lose it because of illness, the major symptoms are apathy, weakness, fainting, anorexia, low blood pressure, and, finally, circulatory collapse, shock, and death. Sir William Osler (1978: 121-2), observing dehydrated cholera patients in the late nineteenth century, provided a classic description of the condition:
[P]rofuse liquid evacuations succeed each other rapidly … there is a sense of exhaustion and collapse … thirst becomes extreme, the tongue white: cramps of great severity occur in the legs and feet. Within a few hours vomiting sets in and becomes incessant. The patient rapidly sinks into a condition of collapse, the features are shrunken, the skin of an ashy gray hue, the eyeballs sink in the sockets, the nose is pinched, the cheeks are hollow, the voice becomes husky, the extremities are cyanosed, and the skin is shriveled, wrinkled and covered with a clammy perspiration. … The pulse becomes extremely feeble and flickering, and the patient gradually passes into a condition of coma.
Many cholera patients in the past could have been saved with rehydration therapy, and it is a central tenet in modern medical treatment that lost body fluids should be replaced with others of the same composition. Replacing a salt loss by giving water, or a water loss by giving salt, can be fatal. Although a history of the illness and an examination of the patient can provide clues to the type of loss, the best method is to test the blood and urine chemically—a method that only became possible in the 1930s, the most useful test being the one that determined the amount of chloride in urine. Accomplished by simply mixing 10 drops of urine with one drop of an indicator and then adding silver nitrate, a drop at a time, until the end point was reached, this test was called the “Fantus test” after Dr. Bernard Fantus at the University of Chicago.
This test proved so useful in treating salt- and water-depleted British soldiers in India and Southeast Asia during the mid-1930s that Dr. H. L. Marriott, in his classic text on fluid replacement therapy, stated: “It is my belief that the means of performing this simple test should be available in all ward test rooms and in every doctor’s bag” (Marriott 1950: 56).
Most early studies of sodium depletion in humans were prompted by diseases. One was Addison’s disease (in which the adrenal gland that makes the sodium-retaining hormone for the body stops working), and another was diabetes (in which a high level of blood glucose forces the excretion of large amounts of water by the kidneys). Such studies were also conducted in cases of extreme depletion brought on by starvation or acute diarrhea. In the 1930s, however, R.A. McCance (1935-6) published his report of a series of experiments that established a baseline on the clinical nature and physiology of sodium depletion in humans.
To induce salt depletion, McCance (1935-6) employed a sodium-free diet combined with sweating. (Because laboratory animals do not sweat, he used humans as his test subjects.) There was no “research-quality” kitchen available, so the food was prepared in the McCance home, and the subjects of the experiment—all volunteers—slept and ate there. The diet consisted of sodium-free “casein” bread, synthetic salt-free milk, sodium-free butter, thrice-boiled vegetables, jam, fruit, homemade sodium-free shortbread, and coffee. During recovery periods, the volunteers ate weighed quantities of high-sodium foods (such as anchovies and bacon) and small, weighed amounts of sodium chloride. Sweating was induced by placing the subjects in a full-length radiant heat bath—for two hours with the heat on and then 10 minutes with the heat off. Their sweat was collected in rubber sheets, and a final washing of each subject with distilled water ensured that even small amounts of lost sodium would be accounted for. The subjects’ average sweat loss was 2 liters, and they commented that the washing procedure was “not uncomfortable” after 2 hours in the hot bath (McCance 1935-6).
By reducing sodium in the diet, along with inducing sodium losses through sweating, McCance and his colleagues found that only a week was required to make healthy subjects seriously sodium depleted. They maintained 4 volunteers in this condition for an additional 3 to 4 days, so that the total period of deprivation lasted about 11 days.
Detailed measurements of intake (food and water) and output (sweat, urine, and feces) recorded that the subjects lost 22.5 g of sodium and 27.2 g of chloride—or about 50 g of salt. Their body weights dropped by about 1 kilogram (kg) per day, and sodium excretion averaged 3,400 mg of sodium per day for the first 4 days. Weights then stabilized, but sodium loss continued.
As the deficiency progressed, the volunteers all experienced feelings of physical fatigue, anorexia, nausea, difficulty in urinating, and extremely weak pulses. Muscle spasms and cramps—especially cramps in the fingers—were common. The subjects’ faces became drawn and “ill-looking,” and they slowed mentally, becoming dull and apathetic. McCance was struck by the similarity of a number of these symptoms to those of Addison’s disease, but the symptoms and signs all rapidly cleared up when the volunteers resumed consumption of sodium (McCance 1935-6).
Both before and during World War II, as many in the Allied armed forces were severely disabled by heat- and water-related illnesses, there was intense interest in understanding the mechanics of water and salt metabolism and the effects of heat on the human body. Research was even undertaken to see how long a man could survive on a raft in the ocean, or in the desert, so that restrictions could be placed on certain military activities (such as limiting the duration of searches for lost aviators, who, after the specified survival time had passed, might reasonably be presumed dead). Other studies examined the conditions that servicemen could be forced to work under and defined safe limits. For example, after 5 hours of marching in the heat, with full packs and no water, even well-conditioned marines could not continue (Ladell 1949).
During and after World War II, there was also interest in the effects of diarrhea. J. L. Gamble (1945) showed that intestinal secretions contained more sodium than chloride, and D. A. K. Black (1946) reported a series of experiments on 10 men with blood pressures averaging 94 mm Hg SBP/59 mm Hg DBP, who were victims of tropical sprue (a disease characterized by chronic diarrhea). The patients were bedridden, listless, and incapable of exertion. But these symptoms disappeared—and blood pressure rose to normal—with sodium supplementation (Black 1946).
Humans, as noted, have evolved complex “redundant systems” to regulate sodium and other essential minerals. For marine animals, deriving sodium from the sea was a relatively easy matter. As evolution progressed, however, satisfaction of sodium needs became a more complicated task. Land dwellers had first to locate sources of sodium, then ingest the mineral, and, further, conserve it within their bodies. To achieve this, physiological and behavioral mechanisms evolved that were designed primarily to protect against a life-threatening deficit of sodium, such as can occur with vomiting, sweating, diarrhea, or kidney malfunction.
But although the body’s systems are reasonably effective against sodium deficit, evolution did not do as well in protecting humans against an excessive intake of the mineral. There are two different kinds of excessive sodium intake: (1) acute ingestion of salt without water, or of very salty water (such as seawater or other briny water); and (2) chronic ingestion of toxic levels of sodium in the food supply.
It seems likely that the former was never a major problem in the history of humankind and probably occurred only when people felt forced to drink seawater or the water from salt springs or salt lakes. Chronic ingestion of excess salt in food, however, is both a recent and a very real problem. Until the past few centuries, salt intake was primarily determined by the amount a person chose to add to food (“active intake”). Increasingly, however, as foods were preserved in salt, and especially today with foods processed with salt, the great majority of salt intake has become “passive,” meaning that food processors and manufacturers—not consumers—decide the quantity of salt to be included in the foods they produce.
Indeed, it has been estimated that in prehistoric times, the daily human intake of sodium was about 690 mg, with 148 mg derived from vegetables and 542 mg from meat (Eaton and Konner 1985). By contrast, today the U.S. Food and Drug Administration (FDA) recommends keeping dietary intake to 2,400 mg of sodium per day (the amount contained in 6 g—or about 1 teaspoon—of table salt). This is roughly the same amount accepted by an advocacy group that promotes a low-salt diet, Consensus Action on Salt and Hypertension (CASH). Needless to say, even such a “low-salt” diet still delivers 3.5 times the amount of sodium provided by the meat and vegetables of the Paleolithic diet.
Although there is also concern over high salt intake by children, food processors only relatively recently halted their practice of adding sodium to baby food. (Presumably, the mineral was meant to improve the food’s flavor for parents.) This change for the better followed the observation by L. K. Dahl, M. Heine, G. Leitl, and L. Tassinari (1970) that young rats with a high sodium intake became hypertensive and remained so for the rest of their lives. But despite this discovery, the average sodium intake by two-year-olds in the United States remains higher than the amount recommended by the FDA and by CASH (Berenson et al. 1981).
Moreover, the average intake throughout much of the industrialized world today is about 10 g of table salt (3,900 mg of sodium) per person per day (James, Ralph, and Sanchez-Castillo 1987; USDA/USDHHS 1995). But only about 10 percent of the sodium consumed occurs naturally in foods; another 15 percent is added by consumers (“active intake”), and the remaining 75 percent is added to food by manufacturers and processors. Therefore, based upon the average industrial diet, only 390 mg of sodium is naturally occurring, 585 mg is added by the consumer, and a substantial 2,925 mg is derived (“passive intake”) from sodium added during processing (James et al. 1987).
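The breakdown just cited is simple arithmetic on the figures from James et al. (1987); the following sketch (mine, for illustration) reproduces it:

```python
# Decomposing the average industrialized-world sodium intake cited in
# the text: ~10 g of table salt, i.e. about 3,900 mg of sodium per day.
total_na_mg = 3900

naturally_occurring = 0.10 * total_na_mg  # ~390 mg present in foods
active_intake = 0.15 * total_na_mg        # ~585 mg added by consumers
passive_intake = 0.75 * total_na_mg       # ~2,925 mg added in processing
```

The striking point is the last line: three-quarters of the total is fixed before the food ever reaches the table.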
Obviously, then, a low-salt diet like that of our ancient ancestors seems an impossible goal; increasingly, the population is at the mercy of the food industry. Yet, that industry gains several advantages by using excessive salt in food processing. Salt, added to food, improves flavor and palatability, especially for those who have become addicted to its taste. Salt increases the “shelf-life” of food (although, with today’s effective packaging and refrigeration technology, there is little need for the presence of near-toxic levels of sodium in the food supply), and salt adds very inexpensive weight to the final product (by helping to retain water), thereby increasing profit margins. Finally, increased salt consumption stimulates thirst and thus increases consumption of the beverages sold by many of the major food companies (McGregor 1997).
History of Hypertension
“Blood pressure” refers to the force exerted against the walls of the arteries as blood is pumped throughout the body by the heart. This pressure can be measured, and abnormally high levels indicate a condition of “high blood pressure,” or hypertension, which is classified into two types. Secondary hypertension is that resulting from a known cause, such as a disease of the kidneys, whereas essential or primary hypertension arises with no evident cause. More than 90 percent of hypertension cases fall into the latter category (Wilson and Grim 1993).
Salt consumption was linked with blood pressure long ago—as early as the second century B.C. in ancient China. In the Huang-ti nei-ching (“The Inner Classic of the Yellow Emperor”), it was written that “if too much salt is used for food, the pulse hardens” (Swales 1975: 1).
In 1836, Richard Bright reported on the kidneys and hearts of 100 patients who had died of kidney problems. He noted that, in most instances, when the kidney was small and shrunken, the heart was often markedly enlarged—an indication of high blood pressure (Bright 1836). In the late nineteenth century, R. Tigerstedt and T. G. Bergman (1898) coined the term “renin” in their report that saline extracts from kidney tissue raised blood pressure. A few years later, French researchers L. Ambard and E. Beaujard noted that blood pressure was lowered by the dietary restriction of sodium chloride and raised by its addition or increase; they believed, however, that chloride was the culprit because they were unable to measure sodium content (Ambard and Beaujard 1904).
Many attempts to induce high blood pressure in experimental animals followed but proved inconclusive. In 1934, however, Harry Goldblatt noted that when he placed an adjustable clamp so that it partially blocked the renal artery of a dog, the animal developed a rapid increase in blood pressure that was sustained so long as the clamp remained in place. Research to determine the mechanism causing this high blood pressure began, and by 1940, two teams of investigators (Eduardo Braun-Menendez and colleagues in Buenos Aires and Irving Page and O. M. Helmer in Indianapolis) succeeded in isolating a material that caused vasoconstriction. Both groups reported that there was a substance—coming from the kidneys—that, when mixed with blood, generated a potent vasoconstricting and blood-pressure-raising substance. The South American group called this “hypertensin” and the U.S. group “angiotonin,” but eventually the two were determined to be the same chemical, which was named “angiotensin.”
Within a few years, W. Kempner (1944) reported that a low-sodium diet decreased blood pressure and heart size even in cases of malignant hypertension. The following year, he described the effects of a low-sodium diet on 65 patients who were hypertensive but showed no evidence of renal disease. After an average of only 48 days on a rice-fruit diet, the average blood-pressure readings decreased from a systolic blood pressure (SBP) of 197 mm Hg and a diastolic blood pressure (DBP) of 115 mm Hg to an SBP of 151 mm Hg and a DBP of 97 mm Hg. In those patients who experienced a decrease in blood pressure, the response was obvious within the first 7 to 10 days on the low-sodium diet, and Kempner observed that the maximum decrease in blood pressure was first attained after only 10 days.
Most early blood-pressure studies were on whites, yet the prevalence of high blood pressure in blacks is much greater and, as mentioned, is thought to result, at least in part, from salt intake. The first report of an association between salt intake and blood pressure in African-Americans was by A. Grollman and colleagues (1945), who studied patients given less than 1 gram (<1,000 mg) of sodium chloride in their daily diets. In the case of two black women, blood pressure declined to normal, promptly rose when salt intake was increased, and then fell again when the low-sodium diet was resumed.
Five years later, V. P. Dole and colleagues (1950) reported the results of a series of studies designed to evaluate the sodium content of Kempner’s rice-fruit diet and its effect on blood pressure and heart size. They confirmed all of Kempner’s observations and, further, documented that the effect of the diet was related to its low sodium content and not to its low chloride content.
The introduction of diuretics, drugs that increase the excretion of sodium and water by the kidneys, came in the late 1950s, and these were quickly shown to lower blood pressure significantly (Freis et al. 1958). In the 1970s, however, a series of observations made by a research group in Indianapolis (which included Clarence E. Grim, co-author of this chapter) once again focused attention on the relationship between dietary sodium and blood pressure. This work culminated in several reports indicating that even a normotensive individual would experience an increase in blood pressure if enough salt was consumed (Murray et al. 1978; Luft et al. 1979). In addition, ethnic differences in sodium metabolism and blood-pressure responses were documented in normotensive subjects; such evidence demonstrated, for the first time, that blacks were more sensitive to salt than whites. These studies also demonstrated the enormous capacity of human kidneys to excrete sodium, supporting Guyton’s hypothesis that sustained excess salt intake will raise blood pressure (Guyton et al. 1995).
Current Thinking on Sodium and Hypertension
Cardiovascular disease is now a major cause of death in all countries of the world, and high blood pressure is the most common (although treatable) precursor of heart disease. Unfortunately, as pointed out in a recent review by J. P. Midgley, A. G. Matthew, C. M. T. Greenwood, and A. G. Logan (1996), most sodium-reduction trials in hypertensive subjects have not yet produced definitive evidence that reducing sodium intake improves long-term health (but see also Staessen et al. 1997). Indeed, some have argued (based on a single observational study) that reducing salt intake may do more harm than good (Alderman et al. 1995).
Nonetheless, high dietary salt intake has been reported to be associated with other adverse medical outcomes, including death from stroke, enlargement of the heart (a precursor to congestive heart failure), and even asthma mortality (Antonios and McGregor 1995). Moreover, recent population-based studies seem to confirm that the major cause of high blood pressure is an excessive dietary intake of sodium. Using a standardized protocol in a 52-center worldwide study, INTERSALT showed a positive, significant, linear relationship between salt intake and blood pressure. The statistical relationship suggested that each 100 mmol of sodium intake was responsible for a 2 mm Hg increase in systolic blood pressure. In addition, the INTERSALT findings suggest a powerful effect of dietary sodium intake on the rise in blood pressure with age. It has also been argued that a reduction in the average intake of sodium in Britain—from 150 mmol to 100 mmol per day—could reduce strokes and heart attacks in that nation by 22 percent and 16 percent, respectively, and would have a greater impact than that of all of the drugs used to treat high blood pressure (McGregor and Sever 1996).
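The INTERSALT figure quoted above lends itself to a back-of-the-envelope calculation. The following is a minimal sketch, assuming the reported relationship (about 2 mm Hg of systolic pressure per 100 mmol of daily sodium intake) can be treated as linear over the relevant range; the function name and constant are illustrative, not taken from the study itself:

```python
# Rough linear extrapolation of the INTERSALT estimate quoted in the text:
# ~2 mm Hg systolic blood pressure per 100 mmol/day of sodium intake.
SLOPE_MMHG_PER_MMOL = 2.0 / 100.0  # assumed linear over the relevant range

def sbp_change(delta_sodium_mmol_per_day: float) -> float:
    """Estimated change in systolic BP (mm Hg) for a given change
    in daily sodium intake (mmol/day)."""
    return SLOPE_MMHG_PER_MMOL * delta_sodium_mmol_per_day

# The British proposal discussed in the text: cut average intake
# from 150 mmol to 100 mmol per day.
print(sbp_change(100 - 150))  # -1.0 mm Hg, by this crude linear model
```

By this crude model, the proposed 50 mmol per day reduction corresponds to an average systolic fall of only about 1 mm Hg per person; the projected 22 and 16 percent reductions in strokes and heart attacks illustrate how even small average shifts, applied across a whole population, are estimated to yield large aggregate benefits.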
For the majority of hypertensive persons, it seems well established that lifestyle improvements, such as lowering dietary sodium intake while increasing dietary potassium intake, reducing body weight, and increasing exercise, can lower blood pressure. Although such efforts would likely lower a person’s blood pressure by “only” 5 mm Hg, it is important to consider the societal health benefits of such a downward shift across the entire population. From a population-wide perspective, this could dramatically reduce the prevalence of hypertension and cardiovascular disease.
It has been estimated that it costs $1,000 per year to treat each hypertensive person (Elliott 1996). If lifestyle changes could lower the average blood pressure by only 5 mm Hg, then 21.4 million persons would no longer require treatment. This would save the United States about $21 billion a year in the cost of health care for hypertension alone, not to mention the costs saved by the reduction of strokes by about 40 percent and cardiovascular disease by about 25 percent. The potential economic advantage of implementing low-cost lifestyle changes to lower blood pressure across society should be obvious. Indeed, it is clear that most countries of the world will not be able to afford expensive medical therapies to control high blood pressure and should be implementing public-health strategies to lower blood pressure and avert the devastating consequences of hypertension for a country’s workforce.
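The savings estimate above can be checked with simple arithmetic. A minimal sketch using only the figures quoted in the text (the $1,000 annual treatment cost from Elliott [1996] and the 21.4 million persons stated above):

```python
# Back-of-the-envelope check of the savings estimate in the text.
cost_per_patient_per_year = 1_000    # US$ per hypertensive person (Elliott 1996)
patients_off_treatment = 21_400_000  # persons no longer requiring treatment

savings = cost_per_patient_per_year * patients_off_treatment
print(f"${savings / 1e9:.1f} billion per year")  # $21.4 billion per year
```

The product is $21.4 billion, consistent with the “about $21 billion a year” cited in the text.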