An Overview

Functionally, it seems appropriate to begin this overview of the work with an explanation of the last part first, because we hope that the entries in Part VIII, which identify and sketch out brief histories of vegetable foods mentioned in the text, will constitute an important tool for readers, especially for those interested in the chapters on geographic regions. Moreover, because fruits have seldom been more than seasonal items in the diet, all of these save for a few rare staples are treated in Part VIII. Most readers will need little explanation of foods such as potatoes (also treated in a full chapter) or asparagus but may want to learn more about lesser-known or strictly regional foods such as ackee or zamia (mentioned in the chapters that deal with the Caribbean area). On the one hand, Part VIII has spared our authors the annoyance of writing textual digressions or footnotes to explain such unfamiliar foods, and on the other, it has provided us with a splendid opportunity to provide more extensive information on the origins and uses of the foods listed. In addition, Part VIII has become the place in the work where synonyms are dealt with, and readers can discover (if they do not already know) that an aubergine is an eggplant, that “swedes” are rutabagas, and that “bulgar” comes from bulghur, which means “bruised grain.”

We now move from the end to the beginning of the work, where the chapters of Part I collectively constitute a bioanthropological investigation into the kinds and quantities of foods consumed by early humans, as well as by present-day hunter-gatherers. Humans (in one form or another) have been around for millions of years, but they only invented agriculture and domesticated animals in the past 10,000 years or so, which represents just a tiny fraction of 1 percent of the time humankind has been present on earth. Thus, modern humans must to some extent be a product of what our ancient ancestors ate during their evolutionary journey from scavengers to skilled hunters and from food gatherers to food growers.

The methods for discovering the diet (the foods consumed) and the nutritional status (how the body employed those foods) of our hunting-and-gathering forebears are varied. Archaeological sites have yielded the remains of plants and animals, as well as human coprolites (dried feces), that shed light on the issue of diet, whereas analysis of human remains - bones, teeth, and (on occasion) soft tissue - has helped to illuminate questions of nutrition. In addition, the study of both diet and nutrition among present-day hunter-gatherers has aided in the interpretation of data generated by such archaeological discoveries. The sum of the findings to date seems to suggest that at least in matters of diet and nutrition, our Paleolithic ancestors did quite well for themselves and considerably better than the sedentary folk who followed. In fact, some experts contend that the hunter-gatherers did better than any of their descendants until the late nineteenth and early twentieth centuries.

Part II shifts the focus from foraging to farming and the domestication of plants and animals. The transition from a diet of hunted and collected foods to one based on food production was gradual, yet because its beginnings coincided with the time that many large game animals were disappearing, there is suspicion that necessity, born of an increasing food scarcity, may have been the mother of agricultural invention. But however the development of sedentary agriculture came about, much of the blame for the nutritional deterioration that appears to have accompanied it falls on the production of the so-called superfoods - rice, maize, manioc, and wheat - staples that have sustained great numbers of people but only at a considerable cost in human health, in no small part because diets that centered too closely on such foods could not provide the range of vitamins, minerals, and whole protein so vital to human health.

Part II is divided into sections, or groups of chapters, most of which consider the history of our most important plant foods under a number of rubrics ranging from “Grains,” “Roots, Tubers, and Other Starchy Staples,” through “Important Vegetable Supplements,” to plants that are used to produce oils and those employed for flavorings. All of the chapters dealing with plants treat questions of where, how, and by whom they were first domesticated, along with their subsequent diffusion around the globe and their present geographic distribution. With domestication, of course, came the dependence of plants on humans along with the reverse, and this phenomenon of “mutualism” is explored in some detail, as are present-day breeding problems and techniques.

The historical importance of the migration of plant foods, although yet to be fully weighed for demographic impact, was vital - although frequently disruptive - for humankind. Wheat, a wild grass that flourished in the wake of retreating glaciers some 12,000 years ago, was (apparently) deliberately planted for the first time in the Middle East about 2,000 years later. By the first century B.C., Rome required some 14 million bushels per year just to feed the people of that city, leading to a program of expansion that turned much of the cultivable land of North Africa into wheatfields for the Romans. Surely, then, Italians had their pastas long before Marco Polo (1254?-1324?), who has been credited with bringing notions of noodles back with him from China. But it was only the arrival of the vitamin C-loaded American tomato that allowed the Italians to concoct the great culinary union of pasta and tomato sauce - one that rendered pasta not only more satisfying but also more healthful. And, speaking of China, the New World’s tomato and its maize, potatoes, sweet potatoes, and peanuts were also finding their respective ways to that ancient land, where in the aftermath of their introduction, truly phenomenal population increases took place.

Migrating American plants, in other words, did much more than just dress up Old World dishes, as tomatoes did pasta. Maize, manioc, sweet potatoes, a new kind of yam, peanuts, and chilli peppers reached the western shores of Africa with the ships of slave traders, who introduced them into that continent to provide food for their human cargoes. Their success exceeded the wildest of expectations, because the new foods not only fed slaves bound for the Americas but helped create future generations of slaves. The American crops triggered an agricultural revolution in Africa, which in greatly expanding both the quantity and quality of its food supply, also produced swelling populations that were drained off to the Americas in order to grow (among other things) sugar and coffee - both migrating plants from the Old World.

In Europe, white potatoes and maize caught on more slowly, but the effect was remarkably similar. Old World wheat gave back only 5 grains for every 1 planted, whereas maize returned 25 to 100 (a single ear of modern maize yields about 1,000 grains) and, by the middle of the seventeenth century, had become a staple of the peasants of northern Spain, Italy, and to a lesser extent, southern France. From there maize moved into much of the rest of Europe, and by the end of the eighteenth century, cornmeal mushes (polenta in Italy) had spread via the Ottoman Empire into the Balkans and southern Russia.

Meanwhile, over the centuries, the growth of cities and the development of long-distance trade - especially the spice trade - had accelerated the process of exploring the world and globalizing its foods. So, too, had the quest for oils (to be used in cooking, food preservation, and medicines), which had been advanced as coconuts washed up on tropical shores, olive trees spread across the Mediterranean from the Levant to the rim of the Atlantic in Iberia, and sesame became an integral part of the burgeoning civilizations of North Africa and much of Asia.

In the seventeenth century, invasion, famine, and evictions forced Irish peasants to adopt the potato as a means of getting the most nourishment from the least amount of cultivated land, and during the eighteenth century, it was introduced in Germany and France because of the frequent failures of other crops. From there, the plant spread toward the Ural Mountains, where rye had long been the only staple that would ripen during the short, often rainy summers. Potatoes not only did well under such conditions, they provided some four times as many calories per acre as rye and, by the first decades of the nineteenth century, were a crucial dietary element in the survival of large numbers of northern Europeans, just as maize had become indispensable to humans in some of the more southerly regions.

Maize nourished humans indirectly as well. Indeed, with maize available to help feed livestock, it became increasingly possible to carry more animals through the winters and to derive a steady supply of whole protein in the forms of milk, cheese, and eggs, in addition to year-round meat - now available for the many rather than the few. Thus, it has been argued that it is scarcely coincidence that beginning in the eighteenth century, European populations began to grow and, by the nineteenth century, had swollen to the point where, like the unwilling African slaves before them, Europeans began migrating by the millions to the lands whose plants had created the surplus that they themselves represented.

The last section of Part II treats foods from animal sources ranging from game, bison, and fish to the domesticated animals. Its relatively fewer chapters make clear the dependence of all animals, including humans, on the plant world. In fact, to some unmeasurable extent, the plant foods of the world made still another important contribution to the human diet by assisting in the domestication of those animals that - like the dog that preceded them - let themselves be tamed.

The dog seems to have been the first domesticated animal and the only one during the Paleolithic age. The wolf, its progenitor, was a meat eater and a hunter (like humans), and somewhere along the way, humans and dogs seem to have joined forces, even though dogs were sometimes dinner and probably vice versa. But it was during the early days of the Neolithic, as the glaciers receded and the climate softened, that herbivorous animals began to multiply, and in the case of sheep and goats, their growing numbers found easy meals in the grains that humans were raising (or at least had staked out for their own use). Doubtless, it did not take the new farmers long to cease trying to chase the animals away and to begin capturing them instead - at first to use as a source of meat to go with the grain and then, perhaps a bit later, to experiment with the fleece of sheep and the waterproof hair of goats.

There was, however, another motive for capturing animals: their use in religious ceremonies involving animal sacrifice. Indeed, it has been argued that wild water buffalo, cattle, camels, and even goats and sheep were initially captured for sacrifice rather than for food.

Either way, a move from capturing animals to domestication and animal husbandry was the next step in the case of those animals that could be domesticated. In southeastern Europe and the Near East (the sites of so much of this early activity), wild goats and sheep may have been the first to experience a radical change of lifestyle - their talent for clearing land of anything edible having been discovered and put to good use by their new masters. Soon, sheep were being herded, with the herdsmen and their flocks spreading out far and wide to introduce still more humans to the mysteries and rewards of domestication.

Wild swine, by contrast, were not ruminant animals and thus were not so readily attracted to the plants in the fields, meaning that as they did not come to humans, humans had to go to them. Wild boars had long been hunted for sacrifice as well as for meat and would certainly have impressed their hunters with their formidable and ferocious nature. Tricky, indeed, must have been the process that brought the domesticated pig to the barnyard by about 7000 to 6000 B.C.

Wild cattle were doubtless drawn to farmers’ fields, but in light of what we know about the now-extinct aurochs (the wild ancestor of our modern cattle), the domestication of bovines around 6000 B.C. may have required even more heroic efforts than that of swine. Yet those efforts were certainly worth it, for in addition to the meat and milk and hides cattle provided, the ox was put to work along with sheep and goats as still another hand in the agricultural process - stomping seeds into the soil, threshing grain, and pulling carts, wagons, and (later on) the plow.

The last of today’s most important animals to be domesticated was the chicken, first used for sacrifice and then for fighting before it and its eggs became food. The domesticated variety of this jungle bird was present in North China around 3000 B.C.; however, because the modern chicken is descended from both Southeast Asian and Indian wildfowl, the question of the original site of domestication has yet to be resolved. The wildfowl were attracted to human-grown grain and captured, as was the pigeon (which, until recently, played a far more important role in the human diet than the chicken). Ducks, geese, and other fowl were also most likely captivated by - and captured because of - the burgeoning plant-food products of the Neolithic. In other parts of the world, aquatic animals, along with the camel, the yak, and the llama and alpaca, were pressed into service by Homo sapiens, the “wise man” who had not only scrambled to the top of the food chain but was determinedly extending it.

The chapters of Part III focus on the most important beverages humans have consumed as accompaniment to those foods that have preoccupied us to this point. One of these, water, is crucial to life itself; another, human breast milk, has - until recently, at least - been vital for the survival of newborns, and thus vital for the continuation of the species. Yet both have also been sources of infection for humans, sometimes fatally so.

Hunter-gatherers, in general, did not stay in one place long enough to foul springs, ponds, rivers, and lakes. But sedentary agriculturalists did, and their own excreta was joined by that of their animals. Wherever settlements arose (in some cases as kernels of cities to come), the danger of waterborne disease multiplied, and water - essential to life - also became life-threatening. One solution that was sensible as well as pleasurable lay in the invention of beverages whose water content was sterilized by the process of fermentation. Indeed, the earliest written records of humankind mention ales made from barley, millet, rice, and other grains, along with toddies concocted from date palms and figs - all of which makes it apparent that the production of alcohol was a serious business from the very beginning of the Old World Neolithic.

It was around 3000 B.C. that grape wine made its appearance, and where there was honey there was also mead. The discovery of spirit distillation to make whiskeys and brandies began some seven to eight hundred years ago, and true beer, the “hopped” successor of ales, was being brewed toward the end of the Middle Ages (about 600 years ago). Clearly, humans long ago were investing much ingenuity in what can only be described as a magnificent effort to avoid waterborne illness.

Milk, one of the bonuses of animal domestication, was also fermented, although not always with desired outcomes. Yet over time, the production of yoghurts, cheeses, and butter became routine, and these foods - with their reduced lactose - were acceptable even among the lactose-intolerant, who constituted most of the world’s population. Where available, milk (especially bovine milk) was a food for the young after weaning, and during the past few centuries, it has also served as a substitute for human milk for infants, although sometimes with disastrous results. One problem was (and is) that the concentrated nutrient content of bovine milk, as well as human antibodies developed against cow’s-milk protein, make it less than the perfect food, especially for infants. But another was that bovine tuberculosis (scrofula), along with ordinary tuberculosis, raged throughout Europe from the sixteenth to the nineteenth centuries. Wet nurses were another solution for infant feeding, but this practice could be fraught with danger, and artificial feeding, especially in an age with no notions of sterile procedure, caused infants to die in staggering numbers before the days of Joseph Lister and Louis Pasteur.

Boiling water was another method of avoiding the pathogens it contained, and one that, like fermentation, could also produce pleasant beverages in the process. The Chinese, who had used tea since the Han period, embraced that beverage enthusiastically during the Tang dynasty (618-907) and have been avid tea drinkers ever since. The nomads of central Asia also adopted the drink and later introduced it into Russia. Tea use spread to Japan about the sixth century, but it became popular there only about 700 years ago. From Japan, the concoction was introduced into Indonesia, where much later (around 1610) the Dutch discovered it and carried it to Europe. A few decades later, the English were playing a major role in popularizing the beverage, not to mention merchandising it.

Coffee, although it found its way into Europe at about the same time as tea, has a more recent history, which, coffee lore would have it, began in Ethiopia in the ninth century. By 1500, coffee drinking was widespread throughout the Arab world (where alcohol was forbidden), and with the passing of another couple of centuries, the beverage was enjoying a considerable popularity in Europe. Legend has it that Europeans began to embrace coffee after the Ottoman Turks left some bags of coffee beans behind as they gave up the siege of Vienna in 1683.

These Asian and African contributions to the world’s beverages were joined by cacao from America. Because the Spaniards and the Portuguese were the proprietors of the lands where cacao was grown, they became the first Europeans to enjoy drinking chocolate (which had long been popular among pre-Columbian Mesoamericans). In the early decades of the sixteenth century, the beverage spread through Spain’s empire to Italy and the Netherlands and, around midcentury, reached England and France.

Thus, after millennia of consuming alcoholic beverages to dodge fouled water, people now had (after a century or so of “catching on”) an opportunity for relative sobriety thanks to these three new drinks, which all arrived in Europe at about the same time. But an important ingredient in their acceptance was the sugar that sweetened them. And no wonder that as these beverages gained in popularity, the slave trade quickened, plantation societies in the Americas flourished, and France in 1763 ceded all of Canada to Britain in order to regain its sugar-rich islands of Martinique and Guadeloupe.

Sugar cultivation and processing, however, added still another alcoholic beverage - rum - to a growing list, and later in the nineteenth century, sugar became the foundation of a burgeoning soft-drink industry. Caffeine was a frequent ingredient in these concoctions, presumably because, in part at least, people had become accustomed to the stimulation that coffee and tea provided. The first manufacturers of Coca-Cola in the United States went even further in the pursuit of stimulation by adding coca - from the cocaine-containing leaves that are chewed in the Andean region of South America. The coca was soon removed from the soft drink and now remains only in the name Coca-Cola, but “cola” continued as an ingredient. In the same way that coca is chewed in South America, in West Africa the wrapping around the kola nut is chewed for its stimulative effect, in this case caused by caffeine. But the extract of the kola nut not only bristles with caffeine, it also packs a heart stimulant, and the combination has proven to be an invigorating ingredient in the carbonated beverage industry.

In East Africa, the leaves of an evergreen shrub called khat are chewed for their stimulating effect and are made into a tealike beverage as well. And finally, there is kava, widely used in the Pacific region and among the most controversial, as well as the most exotic, of the world’s lesser-known drinks - controversial because of alleged narcotic properties and exotic because of its ceremonial use and cultural importance.

In addition to the beverages that humans have invented and imbibed throughout the ages as alternatives to water, many have also clung to their “waters.” Early on, special waters may have come from a spring or some other body of water, perhaps with supposed magical powers, or a good flavor, or simply known to be safe. In more recent centuries, the affluent have journeyed to mineral springs to “take the waters” both inside and outside of their bodies, and mineral water was (and is) also bottled and sold for its allegedly healthy properties. Today, despite (or perhaps because of) the water available to most households in the developed world, people have once more staked out their favorite waters, and for some, bottled waters have replaced those alcoholic beverages that were previously employed to avoid water.

Part IV focuses on the history of the discovery and importance of the chief nutrients, the nutritional deficiency diseases that occur when those nutrients are not forthcoming in adequate amounts, the relationship between modern diets and major chronic diseases, and food-related disorders. Paradoxically, many such illnesses (the nutritional deficiency diseases in particular), although always a potential hazard, may have become prevalent among humans only as a result of the development of sedentary agriculture.

Because such an apparently wide variety of domesticated plant and animal foods emerged from the various Neolithic revolutions, the phenomenon of sedentary agriculture was, at least until recently, commonly regarded as perhaps humankind’s most important step up the ladder of progress. But the findings of bioanthropologists (discussed in Part I) suggest rather that our inclination to think of history teleologically had much to do with such a view and that progress imposes its own penalties (indeed, merely to glance at a newspaper is to appreciate why many have begun to feel that technological advances should carry health-hazard warnings).

As we have already noted, with agriculture and sedentism came diets too closely centered on a single crop, such as wheat in the Old World and maize in the New, and although sedentism (unlike hunting and gathering) encouraged population growth, such growth seems to have been that of a “forced” population with a considerably diminished nutritional status.

And more progress seems inevitably to have created more nutritional difficulties. The navigational and shipbuilding skills that made it possible for the Iberians to seek empires across oceans also created the conditions that kept sailors on a diet almost perfectly devoid of vitamin C, and scurvy began its reign as the scourge of seamen. As maize took root in Europe and Africa as well as in the U.S. South, its new consumers failed to treat it with lime before eating - as the Native Americans, presumably through long experience, had learned to do. The result of maize in inexperienced hands, especially when there was little in the diet to supplement it, was niacin deficiency and the four Ds of pellagra: dermatitis, diarrhea, dementia, and death. With the advent of mechanical rice mills in the latter nineteenth century came widespread thiamine deficiency and beriberi among peoples of rice-eating cultures, because those mills scraped away the thiamine-rich hulls of rice grains with energetic efficiency.

The discovery of vitamins during the first few decades of the twentieth century led to the food “fortification” that put an end to the classic deficiency diseases, at least in the developed world, where they were already in decline. But other health threats quickly took their place. Beginning in the 1950s, surging rates of cancer and heart-related diseases focused suspicion on the environment, not to mention food additives such as monosodium glutamate (MSG), cyclamates, nitrates and nitrites, and saccharin. Also coming under suspicion were plants “engineered” to make them more pest-resistant - which might make them more carcinogenic as well - along with the pesticides and herbicides, regularly applied to farm fields, that can find their way into the human body via plants as well as drinking water.

Domesticated animals, it has turned out, are loaded with antibiotics and potentially artery-clogging fat, along with hormones and steroids that stimulate the growth of that fat. Eggs have been found to be packed with cholesterol, which has become a terrifying word, and the fats in whole milk and most cheeses are now subjects of considerable concern for those seeking a “heart-healthy” diet. Salt has been implicated in the etiology of hypertension, sugar in that of heart disease, saturated fats in both cancer and heart disease, and a lack of calcium in osteoporosis. No wonder that despite their increasing longevity, many people in the developed world have become abruptly and acutely anxious about what they do and do not put in their mouths.

Ironically, however, the majority of the world’s people would probably be willing to live with some of these perils if they could share in such bounty. Obesity, anorexia, and chronic disease might be considered tolerable (and preferable) risks in the face of infection stalking their infants (as mothers often must mix formulas with foul water); protein-energy malnutrition attacking the newly weaned; iodine deficiency (along with other mineral and vitamin deficiencies) affecting hundreds of millions of children and adults wherever foods are not fortified; and undernutrition and starvation. All are, too frequently, commonplace phenomena.

Nor are developing-world peoples so likely as those in the developed world to survive the nutritional disorders that seem to be legacies of our hunter-gatherer past. Diabetes (which may be the result of a “thrifty” gene for carbohydrate metabolism) is one of these diseases, and hypertension may be another; still others are doubtless concealed among a group of food allergies, sensitivities, and intolerances that have only recently begun to receive the attention they deserve.

On a more pleasant note, the chapters of Part V sketch out the history and culture of food and drink around the world, starting with the beginnings of agriculture in the ancient Near East and North Africa and continuing through those areas of Asia that saw early activity in plant and animal domestication. This discussion is followed by sections on the regions of Europe, the Americas, and sub-Saharan Africa and Oceania.

Section B of Part V takes up the history of food and drink in South Asia and the Middle East, Southeast Asia, and East Asia in five chapters. One of these treats the Middle East and South Asia together because of the powerful culinary influence of Islam in the latter region, although this is not to say that Greek, Persian, Aryan, and central Asian influences had not found their way into South Asia for millennia prior to the Arab arrival.

Nor is it to say that South Asia was without its own venerable food traditions. After all, many of the world’s food plants sprang from the Indus Valley, and it was in the vastness of the Asian tropics and subtropics that most of the world’s fruits originated, and most of its spices. The area is also home to one of our “superfoods,” rice, which ties together the cuisines of much of the southern part of the continent, whereas millet and (later) wheat were the staples of the northern tier. Asia was also the mother of two more plants that had much to do with transforming human history. From Southeast Asia came the sugarcane that would later so traumatize Africa, Europe, and the Americas; from eastern Asia came the evergreen shrub whose leaves are brewed to make tea.

Rice may have been cultivated as many as 7,000 years ago in China, in India, and in Southeast Asia; the wild plant is still found in these areas today. But it was likely from the Yangtze Delta in China that the techniques of rice cultivation radiated outward toward Korea and then, some 2,500 years ago, to Japan. The soybean and tea also diffused from China to these Asian outposts, all of which stamped some similarities on the cuisines of southern China, Japan, and Korea. Northern China, however, also made the contribution of noodles, and all these cuisines were enriched considerably by the arrival of American plants such as sweet potatoes, tomatoes, chillies, and peanuts - initially brought by Portuguese ships between the sixteenth century (China) and the eighteenth century (Japan).

Also characteristic of the diets of East Asians was the lack of dairy products as sources of calcium. Interestingly, the central Asian nomads (who harassed the northern Chinese for millennia and ruled them when they were not harassing them) used milk; they even made a fermented beverage called kumiss from the milk of their mares. But milk did not catch on in China and thus was not diffused elsewhere in East Asia. In India, however, other wanderers - the Aryan pastoralists - introduced dairy products close to 4,000 years ago. There, dairy foods did catch on, although mostly in forms that were physically acceptable to those who were lactose-intolerant - a condition widespread among most Asian populations.

Given the greater sizes of Sections C (Europe) and D (the Americas) in Part V, readers may object to what clearly seems to be something of a Western bias in a work that purports to be global in scope. But it is the case that foods and foodways of the West have been more systematically studied than those of other parts of the world, and thus there are considerably more scholars to make their expertise available. In most instances, the authors of the regional essays in both these sections begin with the prehistoric period, take the reader through the Neolithic Revolution in the specific geographic area, and focus on subsequent changes in foodways wrought by climate and cultural contacts, along with the introduction of new foods. At first, the latter involved a flow of fruits and vegetables from the Middle and Near East into Europe, and an early spice trade that brought all sorts of Asian, African, and Near Eastern goods to the western end of the Mediterranean. The expansion of Rome continued the dispersal of these foods and spices throughout Europe.

Needless to say, the plant and animal exchanges between the various countries of the Old World and the lands of the New World following 1492 are dealt with in considerable detail because those exchanges so profoundly affected the food (and demographic) history of all the areas concerned. Of course, maize, manioc, sweet potatoes and white potatoes, peanuts, tomatoes, chillies, and a variety of beans sustained the American populations that had domesticated and diffused them for a few thousand years in their own Neolithic Revolution before the Europeans arrived. But the American diets were lacking in animal protein. What was available came (depending on location) from game, guinea pigs, seafoods, insects, dogs, and turkeys. That the American Indians did not domesticate more animals - or milk those animals (such as the llama) that they did domesticate - remains something of a mystery. Less of a mystery is the fate of the Native Americans, many of whom died in a holocaust of disease inadvertently unleashed on them by the Europeans. And as the new land became depopulated of humans, it began to fill up again with horses, cattle, sheep, hogs, and other Old World animals.

Certainly, the addition of Old World animal foods to the plants of the New World made for a happy union, and as the authors of the various regional entries approach the present - as they reach the 1960s, in fact - an important theme that emerges in their chapters is the fading of distinctive regional cuisines in the face of considerable food globalization. The cuisine of the developed world, in particular, is becoming homogenized, with even natives of the Pacific, Arctic, and Subarctic regions consuming more in the way of the kinds of prepared foods that are eaten by everybody else in the West, unfortunately to their detriment.

Section E treats the foodways of Africa south of the Sahara, the Pacific Islands, and Australia and New Zealand in three chapters that conclude a global tour of the history and culture of food and drink. Although at first glance it might seem that these last three disparate areas of the planet historically have had nothing in common from a nutritional viewpoint, they do, in fact, share one feature, which has been something of a poverty of food plants and animals.

In Africa, much of this poverty has been the result of rainfall, which, depending on location, has generally been too little or too much. Famine results from the former, whereas leached and consequently nitrogen- and calcium-poor soils are products of the latter, with the plants these areas do sustain also deficient in important nutrients. Moreover, 40 inches or more of rainfall favors proliferation of the tsetse fly, and the deadly trypanosomes carried by this insect have made it impossible to keep livestock animals in many parts of the continent. But even where such animals can be raised, the impoverished plants they graze on render them inferior in size, as well as inferior in the quality of their meat and milk, to counterparts elsewhere in the world. As in the Americas, then, animal protein was not prominent in most African diets after the advent of sedentism.

But unlike the Americas, Africa was not blessed with vegetable foods, either. Millets, yams, and a kind of African rice were the staple crops that emerged from the Neolithic to sustain populations, and people became more numerous in the wake of the arrival of better-yielding yams from across the Indian Ocean. But it was only with the appearance of the maize, peanuts, sweet potatoes, American yams, manioc, and chillies brought by the slave traders that African populations began to experience the substantial growth that we still witness today.

Starting some 30,000 to 40,000 years ago, waves of Pacific pioneers spread out from Southeast Asia to occupy the islands of Polynesia, Melanesia, and Micronesia. They lived a kind of fisher-hunter-gatherer existence based on a variety of fish, birds, and reptiles, along with the roots of ferns and other wild vegetable foods. But a late wave of immigrants, who sailed out from Southeast Asia to the Pacific Basin Islands about 6,000 years ago, thoughtfully brought with them some of the products of the Old World Neolithic in the form of pigs, dogs, chickens, and root crops like the yam and taro. And somehow, an American plant - the sweet potato - much later also found its way to many of these islands.

In a very real sense, then, the Neolithic Revolution was imported to the islands. Doubtless it spread slowly, but by the time the ships of Captain James Cook sailed into the Pacific, all islands populated by humans were also home to hogs, dogs, and fowl - and this included even the extraordinarily isolated Hawaiian Islands. Yet, as with the indigenous populations of the Americas, those of the Pacific had little time to enjoy any plant and animal gifts the Europeans brought to them. Instead, they began to die from imported diseases, which greatly thinned their numbers.

The story of Australia and New Zealand differs substantially from that of Africa and the Pacific Islands in that both the Australian Aborigines and (to a lesser extent) the New Zealand Maori were still hunter-gatherers when the Europeans first reached them. They had no pigs or fowl, nor did they plant yams or taro, although they did have a medium-sized domesticated dog and sweet potatoes.

In New Zealand, there were no land mammals prior to human occupation, but there were giant flightless birds and numerous reptiles. The Maori arrived after pigs and taro had reached Polynesia, but at some point (either along the way to New Zealand or after their arrival) they lost their pigs, and the soil and climate of New Zealand did not lend themselves to growing much in the way of taro. Like their Australian counterparts, they had retained their dogs, which they used on occasion for food, and the sweet potato was their most important crop.

Thus, despite their dogs and some farming efforts, the Aborigines and the Maori depended heavily on hunting-and-gathering activities until the Europeans arrived to introduce new plant and animal species.

Unfortunately, as in the Americas and elsewhere in the Pacific, they also introduced new pathogens and, consequently, demographic disaster.

Following this global excursion, Part V closes with a discussion of the growing field of culinary history, which is now especially vigorous in the United States and Europe but promises in the near future to be a feast that scholars the world over will partake of and participate in.

Part VI is devoted to food- and nutrition-related subjects that are of both contemporary and historical interest. Among these are some examples of the startling ability of humans to adapt to unique nutritional environments, including the singular regimen of the Inuit, whose fat-laden traditional diet would seem to have been so perfectly calculated to plug up arteries that one might wonder why these people are still around to study. Other chapters take up questions regarding the nutritional needs (and entitlements) of special age, economic, and ethnic groups. They show how these needs frequently go unmet because of cultural and economic circumstances and point out some of the costs of maternal and child undernutrition that are now undergoing close scrutiny, such as mental decrement. In this vein, food prejudices and taboos are also discussed; many such attitudes can bring about serious nutritional problems for women and children, even though childbearing is fundamentally a nutritional task and growing from infancy to adulthood a nutritional feat.

A discussion of the political, economic, and biological causes and ramifications of famine leads naturally to another very large question treated in the first two chapters of Part VI. The importance of nutrition in humankind’s demographic history has been a matter of some considerable debate since Thomas McKeown published The Modern Rise of Population in 1976. In that work, McKeown attempted to explain how it happened that sometime in the eighteenth century if not before, the English (and by extension the Europeans) managed to begin extricating themselves from seemingly endless cycles of population growth followed by plunges into demographic stagnation. He eliminated possibilities such as advances in medicine and sanitation, along with epidemiological factors such as disease abatement or mutation, and settled on improved nutrition as the single most important cause. Needless to say, many have bristled at such a high-handed dismissal of these other possibilities, and our chapters continue the debate with somewhat opposing views.

Not entirely unrelated is a discussion of height and nutrition, with the former serving as proxy for the latter. Clearly, whether or not improving nutrition was the root cause of population growth, it most certainly seems to have played an important role in human growth and, not incidentally, in helping at least those living in the West to once again approach the stature of their Paleolithic ancestors. Moreover, it is the case that no matter what position one holds with respect to the demographic impact of nutrition, there is agreement that nutrition and disease cannot be neatly separated, and indeed, our chapter on synergy describes how the two interact.

Cultural and psychological aspects of food are the focus of a group of chapters that examines why people eat some foods but not others and how such food choices have considerable social and cultural resonance. Food choices of the moment frequently enter the arena of food fads, and one of our chapters explores the myriad reasons why foods can suddenly become trends, but generally trends with little staying power.

The controversial nature of vegetarianism - a nutritional issue always able to trigger a crossfire of debate - is acknowledged in our pages by two chapters with differing views on the subject. For some, the practice falls under the rubric of food as medicine. Then there are those convinced of the aphrodisiacal benefits of vegetarianism - that the avoidance of animal foods positively influences their sexual drive and performance. For many, vegetarianism stems from religious conviction; others simply feel it is wrong to consume the flesh of living creatures, whereas still others think it downright dangerous. Clearly, the phrase “we are what we eat” must be taken in a number of different ways.

The closing chapters of Part VI address the various ways that humans and the societies they construct have embraced particular foods or groups of foods in an effort to manipulate their own health and wellbeing as well as that of others. Certain foods, for example, have been regarded by individuals as aphrodisiacs and anaphrodisiacs and consumed in frequently heroic efforts to regulate sexual desires. Or again, some - mostly plant - foods have been employed for medicinal reasons, with many, such as garlic, viewed as medical panaceas.

Part VII scrutinizes mostly contemporary food-related policy questions that promise to be with us for some time to come, although it begins with a chapter on nutrition and the state showing how European governments came to regard well-nourished populations as important to national security and military might. Other discussions that follow treat the myriad methodological (not to mention biological) problems associated with determining the individual’s optimal daily need for each of the chief nutrients; food labeling, which when done fairly and honestly can aid the individual in selecting the appropriate mix of these nutrients; and the dubious ability of nonfoods to supplement the diet.

As one might expect, food safety, food biotechnology, and the politics of such issues are of considerable concern, and - it almost goes without saying - politics and safety have the potential at any given time for being at odds with one another. The juxtaposition is hardly a new one, with monopoly and competitive capital on the one hand and the public interest on the other. The two may or may not be in opposition, but the stakes are enormous, as will readily be seen.

First there is the problem of safety, created by a loss of genetic diversity. Because all crops evolved from wild species, it follows in Darwinian terms that the latter possessed sufficient adaptability to survive over considerable periods of time. But with domestication and breeding has come genetic erosion and a loss of this adaptability - even the loss of wild progenitors - so that if today many crops were suddenly not planted, they would simply disappear. And although this possibility is not so alarming - after all, everyone is not going to cease planting wheat, or rice, or maize - the genetic sameness of the wheat or the maize or the rice that is planted (the result of a loss of genetic material) has been of some considerable concern because of the essentially incalculable risk that some newly mutated plant plague might arise to inflict serious damage on a sizable fraction of the world's food supply.

There is another problem connected with the loss of genetic material. It is less potentially calamitous but is one that observers nevertheless find disturbing, especially in the long term. The problem is that many crops have been rendered less able to fend off their traditional parasites (in part because of breeding that reduces a plant’s ability to produce the naturally occurring toxicants that defend against predators) and thus have become increasingly dependent on pesticides that can and do find their way into our food and water supplies.

Genetic engineering, however, promises to at least reduce the problem of chemical pollution by revitalizing the ability of crops to defend themselves - as, for example, in the crossing of potatoes with carnivorous plants so that insects landing on them will die immediately. But the encouragement of such defense mechanisms in plants has prompted the worry that because humans are, after all, parasites as far as the plant is concerned, resistance genes might transform crops into less healthy or even unhealthy food, perhaps (as mentioned before) even carcinogenic at some unacceptable level. And, of course, genetic engineering has also raised the specter of scientists accidentally (or deliberately) engineering and then unleashing self-propagating microorganisms into the biosphere, with disastrous epidemiological and ecological effect.

Clearly, biotechnology, plant breeding, plant molecular and cellular biology, and the pesticide industry all have their perils as well as their promise, and some of these dangers are spelled out in a chapter on toxins in foods. But in addition, as a chapter on substitute foods shows, although these substitutes may have been developed to help us escape the tyranny of sugars and fats, they are not without their own risks. Nor, for that matter, are some food additives. Although most seem safe, preservatives such as nitrates and nitrites, flavor enhancers like MSG, and coloring agents such as tartrazine are worrisome to many.

As our authors make clear, however, we may have more to fear from the naturally occurring toxins that the so-called natural foods employ to defend themselves against predators than from the benefits of science and technology. Celery, for example, produces psoralens (which are mutagenic carcinogens); spinach contains oxalic acid that builds kidney stones and interferes with the body's absorption of calcium; lima beans have cyanide; and the solanine in the skins of greenish-appearing potatoes is a poisonous alkaloid.

From biological and chemical questions, we move to other problems of a political and economic nature concerning what foods are produced, what quantities are produced, what the quality is of these foods, and what their allocation is. In the United States (and practically everywhere else) many of the answers to such questions are shaped and mediated by lobbying groups, whose interests are special and not necessarily those of the public. Yet if Americans sometimes have difficulty in getting the truth about the foods they eat, at least they get the foods. There is some general if uneasy agreement in America and most of the developed world that everyone is entitled to food as a basic right and that government programs - subsidies, food stamps, and the like - ought to ensure that right. But such is not the situation in much of the developing world, where food too frequently bypasses the poor and the powerless. And as the author of the chapter on food subsidies and interventions makes evident, too often women and children are among the poor and the powerless.

To end on a lighter note, the last chapter in Part VII takes us full circle by examining the current and fascinating issue of the importance of Paleolithic nutrition to humans entering the twenty-first century.

We close this introduction on a mixed note of optimism and pessimism. The incorporation of dwarfing genes into modern plant varieties was responsible for the sensationally high-yielding wheat and rice varieties that took hold in developing countries in the 1960s, giving rise to what we call the “Green Revolution,” which was supposed to end world hunger and help most of the countries of the world produce food surpluses. But the Green Revolution also supported a tremendous explosion of populations in those countries it revolutionized, bringing them face to face with the Malthusian poles of food supply and population.

Moreover, the new plants were heavily dependent on the petrochemical industry for fertilizers, so that in the 1970s, when oil prices soared, so did the price of fertilizers, with the result that poorer farmers, who previously had at least eked out a living from the land, were now driven from it. Moreover, the new dwarfed and semidwarfed rice and wheat plants carried the same genes, meaning that much of the world’s food supply was now at the mercy of new, or newly mutated, plant pathogens. To make matters worse, the plants seemed even less able to defend themselves against existing pathogens. Here, the answer seemed to be a still more lavish use of pesticides (against which bitter assaults were launched by environmentalists) even as more developing-world farmers were being driven out of business by increasing costs, and thousands upon thousands of people were starving to death each year. Indeed, by the 1980s, every country revolutionized by the Green Revolution was once again an importer of those staple foods they had expected to produce in abundance.

Obviously, from both a social and political-economic as well as a biological viewpoint, ecologies had not only failed to mesh, they had seriously unraveled. However, as our earlier chapters on rice and wheat point out, new varieties from plant breeders contain variations in genes that make them less susceptible to widespread disease damage, and genetic engineering efforts are under way to produce other varieties that will be less dependent on fertilizers and pesticides.

Meanwhile, as others of our authors point out, foods such as amaranth, sweet potatoes, manioc, and taro, if given just some of the attention that rice and wheat have received, could help considerably to expand the world’s food supply. But here again, we teeter on the edge of matters that are as much cultural, social, economic, and political in nature as they are ecological and biological. And such matters will doubtless affect the acceptance of new crops of nutritional importance.

As we begin a sorely needed second phase of the Green Revolution, observers have expressed the hope that we have learned from the mistakes of the first phase. But of course, we could call the first flowering of the Neolithic Revolution (some 10,000 years ago) the first phase and ponder what has been learned since then, which - in a nutshell - is that every important agricultural breakthrough thus far has, at least temporarily, produced unhappy health consequences for those caught up in it, and overall agricultural advancement has resulted in growing populations and severe stress on the biosphere. As we enter the twenty-first century, we might hope to finally learn from our mistakes.

The Editors

PART I

Determining What Our Ancestors Ate

About 10,000 years ago, humans started changing the way they made a living as they began what would be a lengthy transition from foraging to farming. This transformation, known as the Neolithic Revolution, actually comprised many revolutions, taking place in different times and places, that are often viewed collectively as the greatest of all human strides taken in the direction of progress. But such progress did not mean better health. On the contrary, as the following chapters indicate, hunter-gatherers were, on the whole, considerably better nourished and much less troubled with illnesses than their farmer descendants. Because hunter-gatherers were mobile by necessity, living in bands of no more than 100 individuals, they were not capable of supporting the kinds of ailments that flourished as crowd diseases later on. Nor, as a rule, did they pause in one spot long enough to foul their water supply or let their wastes accumulate to attract disease vectors - insects, rodents, and the like. In addition, they possessed no domesticated animals (save the dog late in the Paleolithic) that would have added to the pollution process and shared their own pathogens.

In short, hunter-gatherers most likely had few pathogenic boarders to purloin a portion of their nutritional intake and few illnesses to fight, with the latter also sapping that intake. Moreover, although no one questions that hunter-gatherers endured hungry times, their diets in good times featured such a wide variety of nutriments that a healthy mix of nutrients in adequate amounts was ensured.

Sedentism turned this salubrious world upside down. Because their livelihood depended on mobility - on following the food supply - hunter-gatherers produced relatively few children. By contrast, their sedentary successors, who needed hands for the fields and security in old age, reproduced without restraint, and populations began to swell. Squalid villages became even more squalid towns, where people lived cheek to jowl with their growing stock of animals and where diseases began to thrive, along with swarms of insects and rodents that moved in to share in the bounty generated by closely packed humans and their animals.

But even as pathogens were laying an ever-increasing claim to people’s nutritional intake, the quality of that intake was sharply declining. The varied diet of hunter-gatherers bore little resemblance to the monotonous diet of their farmer successors, which was most likely to center too closely on a single crop such as wheat, millet, rice, or maize and to feature too little in the way of good-quality protein.

The chapters in Part I focus on this transition - the Neolithic revolutions - which, although separated in both time and space, had remarkably similar negative effects on human health.


