Article 3: The Sustainable Kitchen

In writing my two previous articles (part 1, part 2) on the economics of sustainability, I have come across a concept I find quite intriguing: the “sustainable kitchen.” What drew me to this concept is the notion that sustainability would be achievable if we were to consume locally, organically produced food. The idea is that organic food production protects natural resources and conserves biodiversity, and is therefore sustainable. Moreover, by consuming only locally produced foods, we not only incentivize local producers to produce, but we also reduce the need for preservatives. While the sustainable kitchen concept has its merits, is it economically viable?

Mass producers of fruits and vegetables tend to use synthetic chemical fertilizers to promote plant growth, insecticides for pest control, and herbicides for weed control. Along the same lines, mass meat producers largely use hormones to spur growth and antibiotics to prevent disease in livestock. By contrast, organic produce growers fertilize the soil with manure and/or compost, control pests by introducing beneficial insects and birds, and control weeds by means of crop rotation, hand-weeding, and/or mulching. Likewise, producers of organic meats and meat products tend to use measures such as rotational and free-range feeding and less dense living spaces.

Sure, not all so-called organic farming is sustainable. The increase in demand for organically grown foods has spurred the establishment of large operations, which works against the consumption of locally grown foods (well, unless you happen to reside close to one or more such operations). Mass-produced foods, regardless of whether they are organically or petrochemically grown, also generate toxic by-products. Therefore, we should be careful to clearly define sustainable organic food production. Sustainable organic farming is an ecological approach that attempts, as much as possible, to replace the carbon, nitrogen, water, and micronutrients depleted from the soil by means of renewable resources. Furthermore, sustainable organic growing is labor-intensive. Petrochemical and organic mass-production agriculture were developed to replace labor with machinery. In areas of the world where the cost of labor is higher than the cost of machinery, such as the United States, it is more economical to shift away from labor-intensive toward machine-intensive (capital-intensive) production.

In the eyes of the typical consumer, the biggest difference between organic and conventional food is the price tag. At the grocery store, we usually notice that organically grown foods are much pricier than petrochemically grown foods. However, when we dig deeper we come to realize that the true cost of food is not necessarily the listed price. The price tag we see at the grocery store does not include the externalities incurred due to unsustainable food production. Externalities, as the term suggests, are costs external to production and consumption. They are costs not borne by private producers; rather, they are borne by consumers and society at large. Externalities can be direct, such as the additional amount levied on our water bills to cover the cost of detoxifying our drinking water of agricultural chemical run-off and residue.
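To make the accounting concrete, here is a minimal sketch of how a “true cost” differs from a sticker price once externalities are added back in. All the dollar figures and cost categories below are entirely hypothetical, chosen only to illustrate the bookkeeping:

```python
# True cost = sticker price + externalized costs per unit.
# Every number and category here is hypothetical, for illustration only.

def true_cost(sticker_price: float, externalities: dict) -> float:
    """Sticker price plus the per-unit costs shifted onto society."""
    return sticker_price + sum(externalities.values())

# A conventionally grown head of lettuce, $1.50 at the register:
conventional = true_cost(1.50, {
    "water_detox": 0.40,       # share of municipal water treatment
    "soil_degradation": 0.25,  # lost future productivity
    "health_costs": 0.35,      # pesticide-related care
})

# An organic head priced at $2.50, with smaller externalized costs:
organic = true_cost(2.50, {"soil_degradation": 0.05})

print(f"conventional: ${conventional:.2f}")  # conventional: $2.50
print(f"organic:      ${organic:.2f}")       # organic:      $2.55
```

Under these made-up numbers the apparent price gap nearly vanishes, which is precisely the article’s point: the register price understates what conventional food actually costs us.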

In many cases externalities are not as direct. Soil, air, and water pollution, for example, do not directly affect our bank accounts. Externalities tend to be the type of costs that cannot be readily quantified. The stench of an Iowa pig farm cannot be readily remedied. And how much do you charge the mass-producing pig farmer for blasting your nostrils? What do we charge the neighboring farmer whose Monsanto seeds cross-pollinate with, and thereby contaminate, our crops? What is the price associated with the effects of bovine waste runoff entering our rivers from mass-produced beef operations? Well, eventually, we move out of Iowa or get used to the stink. Monsanto will send their lawyers after us for illegally obtaining their patented, mutated, Frankenstein seeds—and if we happen to live in the Third World, the price of losing the lawsuit would probably mean losing our land. And, if or when a crisis point is reached, we will end up diverting our tax dollars from other uses to clean up the mass-produced bovine sewage in our rivers.

However, these are not expenditures or effects we readily see as consumers. These are long-term effects, and most humans tend to concentrate on the immediate, the direct, and the near term. We can conceptualize what the long run will look like, but as John Maynard Keynes said, “[i]n the long run we are all dead.” What I am getting at here is that the price the consumer sees at the grocery store is the price that affects behavior. This is especially evident among low-wage earners, who are not much concerned about bio-loads and petrochemical toxicity when having to make choices about feeding themselves and their families on a very constrained budget. Spending the extra money to purchase locally and organically produced foods would mean less disposable income available for other essentials like housing, clothing, and transportation.

What about wage earners in higher brackets? Why don’t they embrace the sustainable kitchen? Is it ignorance? Perhaps, in a very small way. More likely, it is related to demand and supply. By and large, humanity has moved toward global mass food production and has seen increased variety coupled with lower costs to the individual consumer. Going back to my first article, people trade because, among other things, they want to consume a variety of goods and services. That variety often extends beyond the locality in which consumers reside. A Minnesotan would be hard-pressed to find a reasonably priced, locally produced bunch of romaine lettuce in the middle of January. It is the demand for out-of-area and out-of-season goods that signals suppliers to produce such goods. And the larger the aggregate demand for such goods, the higher the likelihood that mass-production approaches will be employed to meet it. Therefore, for the sustainable kitchen concept to take off we need a cultural change, as economics, especially at this point in time, will do very little to affect consumption.

We are already seeing a cultural shift, albeit a glacial one, in consumption patterns. The marketing gurus working for the organic food industry have done a very good job of convincing members of the public that organically grown foods are more flavorful and have higher nutritional value. Whether or not this is actually true, especially since the evidence is mixed, is immaterial. What matters, regardless of how it happens, is that we see a decline in unsustainable production techniques. Indeed, we have seen increasing demand for organically grown foods in spite of the large price tag. Consumers, especially among the higher wage earners, see organically grown goods as superior. So the food they consume may be organically grown, but not necessarily locally grown. But the question remains: is the organically produced food being consumed sustainably produced? Furthermore, is there a way to design sustainable organic production so that organically produced goods become affordable to low-wage earners?

Article by: Andre H. Baksh, PhD.

i If you are interested in exploring sustainable kitchen recipes, I recommend The Sustainable Kitchen: Passionate Cooking Inspired by Farms, Forests and Oceans by Stu Stein and Judith Dern.

ii If you are seeking to participate in producing for the sustainable kitchen, take a look at Sustainable Market Farming: Intensive Vegetable Production on a Few Acres by Pam Dawling.

iii Need a wake-up call? Not at all convinced petrochemically based food production is all that harmful? You can start by taking a look at the other side of the argument in Organic Manifesto: How Organic Farming Can Heal Our Planet, Feed the World, and Keep Us Safe by Maria Rodale and Eric Schlosser.

African-American Influences in American Cuisine: Harvest Festival at North Carolina’s Stagville

Over the weekend, we had the pleasure of attending the Stagville Harvest Festival, the brainchild of Orange County craftsman Jerome Bias, hosted by Afro-culinary expert and food historian Michael Twitty. Stagville is what remains of the Bennehan-Cameron family holdings dating back to antebellum North Carolina. It is estimated that some 900 slaves lived, worked, passed through, and died on this 30,000-acre plantation. While Michael Twitty demonstrated intriguing barbecuing methods, using branches from trees set over hot embers to cook pork shoulders and spare ribs, African-American volunteers, including Clarissa Clifton and Nicole Moore, struck a chord with their historical accounts of long-forgotten and often labor-intensive cooking techniques.

Dressed in the customary attire of the antebellum era, each volunteer navigated the fires carefully, remaining upbeat and informative, though a warranted air of frustration loomed over the less-than-perfect setting for a 19th-century method of cooking. Two fire pits were prepared for the event, one for the main barbecue and the other for side dishes. The upkeep of the flame was of utmost importance, as this was the sole heat source for cooking. Any slight breeze or break in the fire would require these individuals to constantly rotate burning wood and cast iron pots to generate the necessary heat. At several points, the hem of an ankle-length dress loomed dangerously close to the flame. There were shouts of warning, indicating that this sort of cooking, which required stepping into the pit and across the flames, was a dangerous occupation for enslaved women. In fact, during the cooking demonstrations, one volunteer noted that a leading cause of death among enslaved women was their dresses catching fire in similar fire pits. This was, of course, in addition to other causes of death such as childbirth, malnutrition, diseases like measles and typhus, and long, laborious workdays.

Without much fuss over the ruination of dresses and potential third-degree burns, the meal preparation went on. The ingredients were simple: oxtails, diced chuck roast, chicken, fresh herbs, flour for dredging, sweet potatoes, green beans, okra, tomatoes, bacon, a little salt, and lots and lots of lard. Why lard? According to volunteer Clifton, “Lard is much more flavorful than regular vegetable shortening. Without lard, we tend to over-salt our food.” In times of limited salt procurement, lard served to flavor otherwise bland dishes. She later explained that lard was also the fat of choice for breads and pies, as it produced tall, fluffy biscuits and perfectly flaky crusts. Another intriguing technique involved rapidly heating water. An old cannonball, strapped to a chain, was placed onto the hot embers. Left there for several hours, it became so hot that simply dunking it into a pot of water brought the water to a boil within seconds. The water bubbled vigorously as the pot was placed into the fire, ready to receive chopped vegetables (in this case, sweet potatoes) for a shortened cooking time. In addition to larding and hot cannonballs, this event raised awareness of the antebellum practice of farm-to-table cooking. Every portion of the animal was utilized. Vegetables and herbs grown in gardens were added to the pot and cooked over an open flame using wood from nearby trees. Shortcuts were taken to stretch the meal; as Clifton pointed out, flour was often hard to come by, so in order to produce a large batch of freshly baked biscuits, boiled and mashed root vegetables were added to the flour, providing much of the body of the biscuits.

Written recipes were nonexistent; cooking was largely an intuitive process passed from one generation to another. This meal was certainly not indicative of everyday fare in antebellum African-American circles. While enslaved cooks did construct dishes like these to feed others, this level of protein and carbohydrates was often missing from their own diets. As Twitty explained, most days slaves ate a corn mush (relatively similar to the West African dishes ugali or kenkey, the latter a fermented variant), a simple porridge made from mashed corn kernels and water. This was sometimes accompanied by salted pork, not unlike the provisions available to Caribbean slaves, and perhaps on rare occasion supplemented by wild game such as rabbit, opossum, and raccoon. According to Clifton, slaves could sometimes obtain a small portion of land for growing their own vegetable gardens. They would need the permission of plantation owners, as cultivation certainly required extra time, which could affect productivity on the plantation, especially after a 16-hour workday. However, there was one catch: if a large event such as a wedding party or holiday function was to take place, the plantation owner could, and often did, confiscate all of the produce in the gardens for his own use. This made food and nutrients a tightly controlled and elusive commodity that often slipped through the hands of the enslaved. Many enslaved people were nonetheless valued for their skill in preparing food. Some were brought to plantations to barbecue, while others were sought out to prepare such things as pies, biscuits, stews, roasts, and side dishes.

The process of preparing a large meal was an arduous one. Plantation-era barbecue required days of preparation. The fire was built a day or two before cooking, allowing the coals to burn down to a low temperature and creating a smoky environment that turned tough cuts of pork into tender morsels. Ingredients were gathered, logs split, and hogs slaughtered and butchered. At this Harvest Festival most of the dishes were cooked throughout the day. The sweet potato biscuits required that the sweet potatoes be boiled until tender, mashed together with flour and salt, and then rolled out by hand before being placed into heavy cast iron pots to bake in the large fire pits. Heaps of lard were melted down in similar pots to fry chicken dredged in flour and herbs, a potentially volatile addition to an outdoor flame, all of which was handled quite expertly by the volunteer staff.

A number of presentations were given before dinner was served. What makes Stagville stand out as a historic site are the slave quarters built in 1850. These still-standing, two-story buildings are not typical of the slave housing that once existed throughout the South; they are large structures that each housed up to four families. Each of the four buildings provided a single room with a hearth per family, and each small room accommodated approximately seven people, in some instances considerably more.

The chimneys on each side of the buildings were constructed from bricks made in the fall of 1850 by enslaved brickmakers. During a single month, some 100,000 bricks were produced. As was pointed out to the gathered crowd, the bricks were dried in the sun prior to being fired in the kiln. Not all had dried completely when moved, however, leaving the fingerprints of their makers on the surface, an indelible mark of the antebellum period.

In addition to the various presentations, there was also a special announcement. Michael Twitty had recently undergone genetic testing by a company called African Ancestry, which traced his familial roots to Africa. The results of this test, still unknown to Mr. Twitty at the time, were announced before dinner commenced. This served rather dramatically to illustrate the purpose of the entire event: the process of reconstruction never ended, as those affected by slavery, particularly its severing of lineage, language, and culture, continue to trace their roots backward. This reconstruction of heritage through food keeps alive the history not only of the African-American community, but of the United States itself.

By dinner time, with nearly 70 guests gathered, Twitty and the other volunteers worked to finish the various dishes, which included the barbecued ribs and pork shoulders that had been cooking all day. In addition, there were sweet potato biscuits, green beans, and a wide variety of dishes inspired by the slavery period. As Twitty pointed out, such a feast would have been uncommon, but the dishes were authentic in flavor and ingredients, and represented many of the foods that would have been served in North Carolina 200 years ago.

It is easy to assume that meal preparation in the days before mass-market grocery stores was limited in flavor. While cooking was labor-intensive, and difficult without all the modern conveniences we so adeptly take for granted, the cooks of the antebellum South incorporated a wide range of ingredients into their dishes depending on availability. Certainly, a simple corn mush or flour-and-water biscuit didn’t excite the palate, but a wide range of locally cultivated herbs was available. Herbs offered not only flavor but medicinal properties important to people with no health care, who had to innovate virtually everything from resources at hand. Nothing went into a dish that didn’t add flavor, and as we sat down for dinner that fact was abundantly clear.

Events such as the one at Stagville are important in preserving cultural history for all involved. It was quite interesting to witness the origins of the techniques and flavors of dishes we often serve in our homes and at barbecues today. While these are prototypes of modern biscuits, fried chicken, and barbecued pork, they make up the foundation of flavors expressed in modern cuisine. Oftentimes we forget the hardships involved in procuring and producing food. This was a celebration of flavor brought forth from a tumultuous period in time. Food is the cornerstone of our survival, an expression of comfort, and the tool by which conversations are started. It is through food preparation that stories are told, and in the act of eating that we learn of each other’s experiences and find similarities, by way of the palate, between populations history has kept apart.

Authored by: Sabrina S. Baksh, MA, and Derrick Riches

Article 2: Innovation and the Economics of Sustainability

We may have a tendency to conceptualize sustainable living as something static, something unchanging. However, technical innovation brings a dynamic perspective to sustainability. Remember, sustainability pertains not only to seeking a desirable quality of life for ourselves but also to ensuring an equal or better quality of life for future generations. This implies that with each succeeding generation the quality of life must be the same as or better than the generation before it. We cannot have a case where your parents enjoy the benefits of certain advancements but your generation is left to live a life equal to that of your grandparents. On the other hand, if a catastrophic event were to lower your parents’ generation’s quality of life relative to your grandparents’, but your generation lives a quality of life equal to your parents’, then by definition your quality of life is sustainable. In this way, we can say life becomes unsustainable only if suffering deepens continuously from generation to generation through time.

Therefore, when we speak of sustainability our concern is whether the actions of one generation maintain or even improve the quality of life not only in the next generation but in all the generations that follow. But, as discussed in my first article, humans are not endowed with the foresight to know whether or not our “desirable quality of life” will adversely affect future generations. What we do have is the ability to infer from the past in order to reduce the uncertainty in our decision making and, therefore, reduce our opportunity losses. We have seen centuries of advancing knowledge and technical innovation responsible for continuously improving the quality of life over time. We have also seen technical innovation responsible for diminishing it.
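The generational definition above can be captured in a short sketch. The quality-of-life index values here are purely hypothetical; the point is only that, under this definition, a one-time catastrophic drop followed by stabilization still counts as sustainable, while continuous decline does not:

```python
def is_sustainable(quality_by_generation: list) -> bool:
    """A lineage is unsustainable only if quality of life declines
    continuously from generation to generation; a one-time drop
    followed by stabilization or recovery remains sustainable."""
    # Flag each generation-to-generation transition that is strictly worse.
    declines = [b < a for a, b in zip(quality_by_generation,
                                      quality_by_generation[1:])]
    # Unsustainable only when every transition is a decline
    # (continuous suffering from generation to generation).
    return not (declines and all(declines))

# Steady improvement across four generations: sustainable.
print(is_sustainable([50, 55, 60, 65]))  # True

# A catastrophic drop, then stabilization at the lower level: sustainable.
print(is_sustainable([60, 40, 40, 41]))  # True

# Each generation strictly worse than the last: unsustainable.
print(is_sustainable([60, 50, 40, 30]))  # False
```

The index itself is a stand-in for whatever measure of well-being one prefers; the definition only constrains its trajectory across generations.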
Somewhere in the remote past human beings functioned at a subsistence level (or at least that’s what anthropologists tell us). Subsistence living means people produce and exploit the available resources for themselves, their families, and their tribes to survive. People had rudimentary tools with which they hunted, gathered, and cultivated. As long as resources were readily available, they were able to experience a consistent standard of living from one generation to the next, with very few technical advancements or innovations. However, as resources became more scarce, human beings were driven to develop more advanced tools and implements. Game, fish, and wild berries abundantly available only a few generations prior became depleted, perhaps because more and more humans had come to use the limited set of resources. Finally some human (or humans) domesticated animals, which meant they were less dependent on hunting for meat. They invented the plow, with which they were able to cultivate previously uncultivable land. They also came up with methods of food storage and preservation, such as dried, salted, and pickled flora and fauna.
Somewhere along the line they figured out that the clan from over the hill was able to produce a better yield of corn, or that the tribe across the river was able to raise tastier goats than their own. They also found that their honey bees produced more honey than could be consumed locally: a surplus. Selectively breeding livestock to produce more milk or meat, as well as producing high-yield seed strains, increased the abundance of food in each locality to the point where there were surpluses to trade. But trade was not an option if one of the clans was not able to produce a surplus of anything tradable. What were such localities to do? Typical options were to raid other localities and take home whatever spoils they could carry, or to invade, impose rule over the defeated clan, and acquire their specialty. However, when tradable surpluses were available, localities involved in trade were likely to see improvement in their general quality of life. Why? Let’s take the goat herders as an example. Sure, they ate delicious goat kebabs, but their grains were unappetizing and they never tasted honey. With trade they were able to eat fine quality goat kebabs, grains, and honey cakes. It is the never-ending choice made throughout human history: compete or cooperate.
In order to support and improve trade, humans continued to develop more efficient means of transportation. They built bigger and stronger boats and acquired knowledge of things nautical. Those groups who grasped the concept of the wheel were able to travel longer distances over land. Of course, overland trade was made even more efficient with the establishment of trails and eventually the building of roads. Traders also had to develop methods of securing their goods against spoilage, which led to continual improvements in storage and preservation.

Early History
Technical innovation in commodity production, transportation, and storage are three factors we see throughout human history. Storage and spoilage played a major role in what was traded. Commodities such as meat from freshly slaughtered animals or fresh fruits and vegetables were more likely traded to consumers residing close to the marketplace. Hardier commodities such as grains and spices were slated for long-distance trade. Roman and Greek merchants, for instance, aided by the trade winds, sailed from Red Sea ports to India to trade European metals, especially gold, for Indian spices and Chinese silk. The spice trade continued to be lucrative after the fall of the Roman Empire. First the Arabs and Turks dominated the trade, until the Portuguese cannon put an end to that. The Portuguese stranglehold was finally broken by Britain’s nautical mastery. Compete for the power to force cooperation.
In each of these instances we see that technical innovation played an important role in who controlled the trade. The Romans and Greeks used their advanced galleys to venture across the Indian Ocean. The Arabs and Turks used the dhow, an early pure sailing vessel, to sail along the coasts of the Indian Ocean. The Portuguese then arrived in 1505 with the mighty man-o’-war, a heavily armed warship, and became the major player. This lasted until the East India Company and its ships of the line relegated the Portuguese to a minor role.
Technical innovation has been shown to promote sustainability but also to destroy it. A combination of battle tactics and weapons superiority allowed the Romans to destroy Carthage and Corinth. The Carthaginian and Corinthian quality of life was certainly not improved after Scipio Aemilianus and Mummius, respectively, enslaved both populations. The quality of life in Baghdad, Kiev, Herat, and Samarkand became unsustainable after the Mongol armies, with the aid of the composite bow, slaughtered much of the population in each of those cities. Advancements in ballistics and air weaponry, coupled with ensuing disease outbreaks and starvation, led to the loss of between 60 and 85 million lives in World War II. Wars can be considered sudden catastrophic events no different from earthquakes. Wars, earthquakes, volcanoes, floods, droughts, and the like can completely destroy a civilization in some cases. In other cases, surviving generations recover and continue to progress, at times even surpassing their ancestors’ quality of life. So unless the event leads to extinction, or each generation thereafter suffers more than the last, then, technically speaking, once succeeding generations face the same level of suffering we can say life is once again sustainable.

The Industrial Revolution and into the Modern Era
Over the past two centuries, with the advent of the steam engine and the internal combustion engine, humanity not only vastly improved the means of water and overland transportation but also developed and advanced air transportation. The sailing ship gave way to the coal-driven steamship, which in turn gave way to the petroleum-driven ship. Overland travel once dependent on oxen, mules, and horses was taken over by the railroad, and now primarily the automobile. Air travel has allowed people and goods to be shipped from one locality to the next in a matter of hours.
The tremendous advances made in transportation over the last two centuries have shrunk the globe. Better transportation means we can acquire goods from faraway places in less time. It allows areas around the planet to focus their resources on producing fewer and fewer things. The Union Pacific and Central Pacific Railroads joined at Promontory Summit, Utah, connecting the east and west coasts of the United States. But it was the invention of the refrigerated rail car that allowed beef raised in the western United States to be shipped to the meatpacking plants located in the eastern part of the country. Before this, there were great cattle drives, which reduced the amount of meat that ultimately reached its final destination.
Another innovation in storage and preservation, the tin can, followed by the aluminum can, completely revolutionized food production. Canning allowed perishable goods to reach distant places all over the world. Canned foodstuffs were initially luxury items; the British in India got a reprieve from Indian cuisine by enjoying canned mutton, canned peas, and so forth, imported from Britain. Technological advances made canning cheaper and, therefore, enabled the mass production of canned goods. With canning, food producers increased production to meet demand. Large investments were made in the fishing industry to increase hauls, investments not made before because of spoilage. Fish and meat, like relatives, become more and more unpleasant with each passing hour. Canneries were built and maintained along many coastal areas to process the day’s catch. Canned herring, sardines, and kippers lined shelves in general stores. Also lining the shelves were canned vegetables and fruits harvested in faraway places.

The Modern Era
Coming into the 20th century we see further innovation in transportation along with storage and preservation (refrigeration). By the end of the century, railcars were mainly relegated to moving goods rather than people. The automobile became the preeminent means of land transportation. Humans now traveled longer distances to work and conduct business in their cars, driven on modernized interstate and local highways. Growing networks of roads allowed semi-trailers to transport goods to ever more far-reaching localities. What matters is not only that goods are reaching more people, but also what types of goods. Prior to the advancements made in refrigeration, fresh meats and produce were more or less locally raised, limited in supply, and therefore expensive. With technical innovation, the masses are able to consume Australian lamb and Chilean sea bass without canning. And with each passing generation the masses have become less and less inclined to eat a variety of canned foods and more and more inclined to demand fresh foods. The problem is, even with advancements in preservation, fresh foods spoil at a much more rapid rate than canned or frozen foods. Is this path sustainable?
Yes, advancements in transportation, storage, and production have led to better standards of living. But there is another side to the story yet unexplored, namely pollution and resource exhaustion. Humans have been polluting the environment since they figured out how to build cities. Historically, sewage has been a pressing problem, and deforestation and overpopulation helped change North African grasslands and woodlands into the now-famous Sahara. What is occurring today is happening on a global scale as mass consumption continues to increase. We see deforestation in Brazil to make more grazing land available for cattle, prompted by growing worldwide demand for beef. Deforestation in rainforest regions can have disastrous consequences. Even more so than in temperate regions of the globe, cutting down the trees in a rainforest quickly leads to soil erosion and a buildup of silt in rivers and streams. Soil erosion means less arable land on which to grow the grass used to feed the cattle. Therefore, more of the forest is cleared to make grazing land available. Land is limited in supply, which means this mode of production is certainly unsustainable.
Problems related to raising beef cattle are not localized to rainforests. In North America, manure runoff, mostly from melting snow, adversely impacts fish in lakes and streams. Why? Manure contains ammonia, which is very toxic to fish. Furthermore, manure delivers excessive amounts of nutrients to bodies of water, which spurs algae blooms. At night, when sunlight is unavailable for photosynthesis, algae consume dissolved oxygen through respiration. When dissolved oxygen falls beneath certain levels, fish and other aquatic fauna can no longer survive. On the surface this may not seem to matter since, in the United States, freshwater fish is not really a desired commodity compared to beef. The choice is not a difficult one: which would you prefer, prime rib or trout? As such, we can say Americans are sacrificing one food source for another. But in making this sacrifice they allow increasingly intensive pollution of their water supply, their drinking water supply. To state the obvious, this is certainly not sustainable.
Americans may not consume as much freshwater fish per capita as Southeast Asians, but they certainly do show a desire for seafood. Tons of shrimp, crab, halibut, salmon, and pollock are gobbled up annually. Advanced technology allows North America, Europe, Australia, and Japan to increase hauls from the seas and oceans with each passing year. And with each passing year the likelihood of replenishment falls as marine habitats are destroyed and areas are over-fished and over-crabbed. When was the last time anyone had a crab boil with fully grown blue crabs? Declining yields have thus spurred the spread of aquaculture.
Aquaculture refers to fish, shrimp, mollusk, and other types of aquatic farming. Aquaculture is not new to humanity. The Egyptians, for instance, farmed tilapia in ponds for consumption. In the modern world, however, we have moved beyond using only ponds. Through modern technology, we are now able to construct and maintain ocean enclosures such as the salmon farms seen in southern Chile. For fish farming to be profitable versus wild-caught fishing, fish farms are incentivized to maximize the number of fish they can raise within each enclosure. As the number of fish packed into an enclosed area increases, so does the likelihood of disease. Therefore, antibiotics are liberally dispensed. Antibiotic overuse leads to the development of resistant strains of bacteria, which can affect not only fish but also the humans who consume the flesh of antibiotic-treated fish. As resistant strains of bacteria emerge, we are compelled to develop stronger and stronger antibiotics. Furthermore, large numbers of fish living in very limited spaces produce large volumes of concentrated waste. Antifouling agents commonly applied to enclosures, such as tributyltin, are often toxic and can even be carcinogenic.
We should also keep in mind that the adverse effects of fish farming are not localized to only the fish within the enclosures. These enclosures are not completely isolated, self-contained structures but are more like cages that allow the ocean currents to flow in and out. Thus, any treatment conducted within a farm is very likely to flow out into the surrounding marine environment. Wild fish and other marine animals located close to the enclosures are the most likely to be affected. Resistant strains of bacteria can be devastating to wild populations, as it is highly unlikely they will receive treatment with stronger and stronger antibiotics. In spite of the claim that most of the fish and marine life we consume comes from farms, a sizable portion of this food source is wild caught. Besides, the current culinary trend in the developed world is to eat wild-caught seafood. So, overfishing led to the establishment of fish farms and all their warts, which led to the preference for wild fish, which are now being overfished. Insanity, pure insanity, and not sustainable.
Oh, don’t think I have let the vegetarians off the hook. Increasing demand for and mass consumption of fresh fruits and vegetables have prompted farmers to push production into overdrive. Over-farming leads to soil degradation and water depletion. The activity that increases production in the present is the same one that will destroy it in the future. Soil erosion affects productivity because it removes the topsoil, which contains most of the organic matter and plant nutrients. Furthermore, it takes about 300 years for one inch of topsoil to form. The subsoils that remain after erosion tend to be less fertile, less absorbent, and less likely to retain pesticides, fertilizers, and other plant nutrients.
And what about pesticides? Well, to ensure that a profitable yield makes it to the marketplace, mass-producing farmers are compelled to use artificial means of pest control. As with animal and fish agriculture, runoff puts much of the remaining pesticides into streams and rivers. Sometimes pesticides are even found in the groundwater. The effects of pesticides can accumulate in animals and humans over time. Possible side effects of consuming pesticide-treated fruits and vegetables include cancer, birth defects, brain damage, and respiratory illnesses. Eat your vegetables; they are good for you, until they kill you.
So far I have focused only on the relationship between food production and sustainability. But there are other human activities that have affected and will continue to affect food quality and quantity. Industrialization has been both a boon and a disease to humanity. Continued innovations in areas such as health care and communication have incrementally improved the quality of our lives with each passing generation. However, production has an evil twin: industrial waste. Mercury and other toxins are being found with higher and higher frequency in our oysters, crabs, and fish. I don’t care how big the fish grow in the Hudson River; I am not eating any. Who doesn’t like a crab boil? Well, carefully assess whether or not you eat the crab’s mustard, as it may be toxic. When the sources of our sustenance cause our death, that is not sustainable.
In the past, an unsustainable way of life led to some form of correction after a critical mass was reached. The term correction, in this context, connotes something catastrophic, something horrible. Starvation and disease would lead to high mortality rates, and the population would be reset to some level that was once again sustainable given the amount of available resources. Much of the time people did not stay in the affected location but sought resources elsewhere. Mass migrations would ensue. Of course, this works only if there is another place in which to relocate. What happens when there is no place else to go? What happens when the entire globe is adversely affected simultaneously? Extinction?

Authored by: Andre Baksh, PhD

Seeking a Definition of Barbecue

As barbecue continues to evolve from a rural southern cooking tradition into a global cuisine, more attention has been, and needs to be, paid to its origin and definition. Purists seeking authenticity find themselves hampered by a confused and often false history of the development of barbecue linking its roots to the early colonial Americas. As television cooking shows and a wealth of information, recipes, and methods available on the internet draw increasingly divergent participants to barbecue, the arguments once relegated to backwoods barbecue joints are now being taken up on the patios of the Netherlands and in the emerging restaurants of Western Australia.

To say that barbecue owes its origins to a primeval fire 250,000 years ago is a tautology. All cooking originates with that fire, so this is as interesting as saying that the first cooking fire is the origin of cupcakes. Saying that the discovery of fire led to barbecue fails to differentiate it from shish kebab, luau, or cake baking. Fire is the first technology, cooking the first technique, and smoke the first flavoring. For the vast majority of cooking history, live fire was the only method, and everything tasted of char and smoke. This definition of barbecue places virtually all cooking under its umbrella and is too broad to be of consequence.

After World War II, as Americans migrated from the front porch to the back patio, barbecue became synonymous with any kind of outdoor cooking. Barbecue simply meant, and largely means today, a meal prepared out of doors, the food served at that meal, a cooking apparatus for its preparation, or a gathering to enjoy it. This gives us the accurate but distasteful ability to say that one attended a “barbecue to consume barbecue, barbecued on a barbecue.” We have tamed and contained the campfire, cleaned it up, and made it as reliable and predictable as our microwaves. Calling this barbecue, globally the most recognized definition, is again too broad.

In the lexicon of the Southern United States, barbecue is typically defined as tough cuts of pork or beef, slow cooked at a low temperature in a smoky environment. Low, slow, and smoke are referred to as the three legs of traditional barbecue. This style of cooking and its flavor profile developed from multiple sources, eventually joining together from divergent traditions to become the food that is ordered in restaurants around the world under the banner of barbecue. Finding a single recipe for barbecue is an impossibility since it is a culinary tradition, not a specific menu item or even a collection of dishes. The stereotypical barbecue restaurant may serve brisket, pulled pork, pork ribs, and chicken, but here the similarities end. Flavor profiles, sauces (or the lack thereof), side dishes, and presentation will all vary based on the location and the specific tradition being followed.

Attempting to trace the root of the word barbecue is as problematic as trying to define its specific flavor. Most etymologists today agree that the word barbecue comes from barbacoa, as used in many of the Caribbean dialects (Taino in this case), meaning “sacred fire pit.” However, it can also be claimed that the Spanish word barbacoa comes from the indigenous Haitian word barbakoa, meaning a framework of sticks. Or it could have come from the Isthmus of Panama, where the word was first recorded in print. Historian Andrew Warnes, in his book Savage Barbecue, shows that throughout the Caribbean region the word barbacoa does in fact mean a structure of sticks producing a raised platform on which one could sleep, store grains above the ground, or cook. Because this framework is wooden, it would have to be constructed in such a way that the fire didn’t get near enough to the sticks to cause the wood to combust. This would create a slow cooking method with a heavy dose of smoke (which also repelled insects), and might be one of the reasons why this method of cooking became connected with modern barbecue. But how does this differ from live fire cooking used around the world?

Though largely disputed, one popular origin story involves the French term barbe-à-queue, meaning “beard to tail,” which refers to a medieval European method for cooking a whole goat on a spit or wooden structure over a live fire. The Anglo contraction of barbe-à-queue is barbeque. Note the “e” in the middle of the word, where barbacoa has an “a.” It should be noted that this origin has been too hastily dismissed, if for no other reason than that most dictionaries published in America and Europe during the 18th and 19th centuries listed it as the actual origin of the modern word barbecue. It is also important to note that barbecue has, for the last several centuries, had spelling variants like bar-b-q, bar-be-que, bar-b-que, bar-b-cue, bar-be-cue, barbicue, and barbacue.

The French had a strong influence in the Americas during the early centuries after their arrival, both in the Quebec region of Canada and in the Caribbean. Many of the northern Acadians migrated to the Louisiana area and became the Cajuns. These peoples brought with them their cooking techniques and language. In the French Caribbean colonies, these techniques were adopted by enslaved African populations in conjunction with their own traditions. Many of these individuals were sold within the Caribbean colonies or purchased and sent to the continental mainland.

The similarity between barbecue and barbaric has also been noted by historians and etymologists. One story of the origin of barbecue comes from a now extinct tribe who lived in the region of modern Guyana (a place where the wooden barbacoa frame was also used), known for spit roasting their captured enemies, hence barbecuing them. The association of barbecue as a barbaric method of cooking, or even execution, starts in the later part of the 17th century with illustrations of cannibalistic cookouts and stretches into the 19th century, when Robert Louis Stevenson, in Treasure Island, gave the fictional pirate Long John Silver the nickname Barbecue, as in, he barbecued his victims.

It can also be noted that the term buccaneer, from the French boucanier, literally translates as “user of a boucan,” a native Haitian style of wooden frame grill used for cooking meats, particularly manatee. The buccaneers started out as a group of runaway indentured servants who fled to the then abandoned island of Hispaniola (now Haiti and the Dominican Republic). Cattle had been introduced to the island a century before, and their population had grown dramatically. These “buccaneers” smoked strips of meat in a process similar to making beef jerky and tanned hides, which they traded with passing ships. They ultimately turned to piracy, creating many of the pirate legends we know today. Perhaps it is this story that inspired the connection between barbecue and pirates, or simply the European belief that barbecue was a barbaric method of cooking.

Etymologists trace the origin of a word from the origins and meanings of its root parts. If barbacoa does in fact mean “sacred fire pit,” then the connection seems evident. If, however, this definition is a later attribution, then barbacoa may, in fact, not be the source of the modern term barbecue. Ultimately, the origin of the word and its definition are inextricably linked to the evolution of the method of cooking. Obviously, the indigenous peoples of the Caribbean region did not smoke pork ribs. The fish and iguana seen slow roasting on wooden frames in early illustrations were either grilled or made into a kind of jerky. Barbecue as we understand it today was probably not their goal.

Barbe-à-queue is, by definition, a method of whole-animal cooking, and whole animals were the mainstay of early barbecue. The 1828 Webster dictionary differentiates between the barbecue of the West Indies, where it was whole hog, and the barbecue of the United States, which was typically oxen. Similarly, dictionaries in England refer to barbecue as the cooking of a whole animal over a live fire. References to barbecue in the 17th and 18th centuries universally refer to the live fire cooking of whole animals. In a 1707 pamphlet, Edward Ward described the roasting of three pigs “broiled under an apple tree” in Peckham. Again, the details are scant, but he tells us that these hogs were cooked in the “West Indian manner,” meaning with the heads and tails on.

It might be too much to assume that barbecue migrated directly from the Caribbean into the southern colonies, but a strong argument can be made for it. There is, however, another prominent path, one that starts in Latin America and ends in central Texas. This might be the path of barbacoa; after all, it is a Spanish word. In the days before Sam Houston, and for decades afterwards, it was common for Mexican cowboys on cattle drives to hold a barbecue, which consisted of slow roasting the head of a cow over a live fire. Today in Mexico, barbacoa refers to the slow cooking of meats, particularly whole sheep, over an open fire or, more authentically, in a covered pit.

There is no simple way to reconcile these apparently contradictory lineages. Like all things American, barbecue is a fusion of multiple cultural traditions combined with the events that built the nation. The hogs brought by de Soto and the emergence of the massive cattle industry provided an abundance of available meat, particularly the undesirable cuts that barbecue is so good at turning into a tender and flavorful finished product. The waves of immigration, voluntary and forced, provided the cultural influences that adapted cooking styles, innovated sauces and seasonings, and produced new ways of adding smoky flavors. Similarly, the word has evolved from multiple understandings of barbecue over the course of the past 400 years.

Barbecue is a culinary tradition that emerged from a multi-ethnic America. As distinctly American as jazz and equally capable of universal appeal, it will continue to evolve and grow as it spreads around the world. Of course, there will be arguments of authenticity and origin, but these are largely academic.  Like any cuisine, barbecue is a living thing and it will change to adapt to ingredients, techniques, flavors, and location as it grows. Barbecue is more than just smoked meats; it is a combination of flavors, accompanying side dishes, and presentation, but more importantly it is a tradition.

Authored by: Derrick Riches, of the #1 Barbecue and Grilling site in the world.

Curry: The Quest for Dominance in Plated Form

A few years ago, I happened upon an article naming Chicken Tikka Masala the number one food in the United Kingdom. I found this intriguing, though unsurprising given the history of British involvement in India. What I found most interesting was that this type of cuisine, reintroduced to the United Kingdom by Indian immigrants and adopted by newer generations, would have been scoffed at less than a century ago for being too pungent, spice filled, and harsh on British sensibilities. Despite popular Victorian concepts of curry, and its recently discovered prototype created some 4,000 years ago in the Indus Valley region, this particular variety shines a spotlight on the negotiations between Indian cooks and English bellies.

Chicken Tikka Masala is believed to be a hybridized dish created by a Bangladeshi cook in 1960s Britain, who tried to quell the temper of an impatient customer requesting a sauce on top of his chicken tikka (a spicy, marinated, grilled chicken dish). The secret ingredients? Tomato soup and yogurt. This original concoction sounds mild, to say the least. Many varieties of curry are far from sedate and can render the diner speechless. If this sounds familiar, then you’re no stranger to beads of sweat pouring from your brow or the heat of the chilies growing exponentially in your mouth while you simultaneously reach for beer, water, or yogurt to dull the ache. If you are anything like me, you’ll continue eating. The complex flavors far outweigh the perceived pain, and to know a good hot curry is, in some way, akin to understanding India. Those who did not grow up with this delicious popping medley of spices would, no doubt, be put off by its perceived audaciousness. Couple that with the overbearing heat, strong odors, cultural differences, and preconceived notions of the “right way versus the wrong way,” and you have the framework for British India and its love/hate relationship represented in plated form.

Victorian British diets largely consisted of gravy-laden roasts, stews, casseroles, and pastries. They also consumed fruits such as pears and apples, as well as vegetables of the old and new worlds. Most officials were used to orderly homes, with cooking staff manning large kitchens spacious enough to roast pieces of freshly slaughtered calves or game. India did not afford them their cuisines of choice, nor familiar flavor profiles. Indian kitchens were often smaller, where cooks sat on the floor preparing foods in cramped, smoky conditions. According to the Victorian cookbook author Wyvern (Culinary Jottings for Madras), trying to implement an upper-class British kitchen in India was all for naught. He recalls an incident in which an army official, through civilizing efforts, built several large barrack kitchens. Proud of his endeavors, he proceeded to inspect them, only to find each kitchen empty and unused. When he finally reached the male cooking staff, holed up in an Indian-style kitchen, he promptly scolded them for their abandonment and was met with equal scolding from the head cook, who argued that British kitchens were “bad sense kitchens” that took too much firewood and whose structure was all wrong for fast cooking. Wyvern continued with a discussion of what he described as his “child-like” Indian cook, Ramsamy, using him as an example to guide Britons in administering functional Anglo-Indian homes. His instructions included the “proper” treatment of Indian cooks, akin to coaxing an impetuous child without giving offense. This indicates the tenuous relationships often hidden in contemptuous side glances between servants and the head of household. Wyvern spoke strongly, as if to show his dominance via the word, but in the same breath retracted his belligerence for a more delicate approach. He warned his readers that British kitchen wares would be rendered useless, as those of the subcontinent will do as they have always done.
It was not uncommon for British ladles, stock pots, and other utensils to be thrown aside for native wares or reused in some alternate fashion. Cooks remained vigilant in their comforts and training in the art of regional Indian cooking, causing great stress to their British employers. Though these individuals were servants, the wrong word or intonation would set off what Wyvern considered mealtime “catastrophes.” Cooks were frequently asked to leave out ingredients integral to native dishes. They retaliated by adding their flavor to British cuisine, and so the struggle went back and forth as the subjugator, successful in administering sections of the country for the benefit of the metropole (Britain), could not properly administer the cooks in his kitchen. Mealtime served as a triumphant exercise of will, which the Indian cook often won of his own accord, though Indians functioning outside of the kitchen were also subject to criticism of character based on food choices.

In the late 19th and early 20th centuries there appeared a new “breed” of Indian. Efforts by British officials to nurture the new Indian included encouraging some, though not all, Indian males of proper caste and social standing to be English educated, to dress in Western clothing, and to conduct themselves as British men. Young Indians hoping to secure the successes of Europeans not only subscribed to the teachings of the new Indian but adopted their eating habits as well. Meat eating became the yardstick by which masculinity was measured. For the British, meat eating, especially of beef, was thought to project masculine qualities. Since most Indian men of the Hindu faith did not eat meat, they were deemed effeminate based on the perceived lack of substance in their diets. After all, in the British consciousness, their control of India proved intellectual and physical prowess over their subjects, and therefore what they consumed was ideal. This thinking was also pervasive at home. Beef eating in Victorian England was considered a masculine activity. Not only was it pricey to procure a cut of beef, but to have it regularly set one above the rest. Some Indian men, and even women, did partake of British-style meats, while many were sorely disappointed by the lack of flavor and the scant use of basic ingredients like salt. One would have to stop and ask at this point why Indians believed in the power of meat eating, especially given the contradictory place Indian food held in the discourse. On one hand, returned British officers and officials often reminisced about the great curries they had tasted. In fact, boasting was the usual fancy of those indulging in spirits. It was not uncommon to hear arguments develop over which region of India possessed the hottest curries and who could withstand their intensity. Spicy curries became synonymous with masculinity for some already in India, and for those repatriated and longing for India.
Yet these same curries suffered criticism for being abhorrently over-spiced concoctions deemed unfit for the British palate; the same curries that Sahibs and Memsahibs required to be made the British way (prepared with a mild, thickened gravy over meat and vegetables). On the other hand, there was the matter of Indian social structure, which placed vegetarians at the top of the social hierarchy. Brahmins, the highest caste, did not consume animal flesh, nor did they come into contact with those who handled it. This was believed to be a polluting activity that would strip them of caste status. In most cases, true Brahmins would not share a meal with Britons, as Britons were considered without caste and were often served by the lowest of the social strata. The very act of eating relegated the Victorian official to the status of an untouchable, while Indians partaking in meat eating became the brunt of jokes by those who did not succumb to foreign ways. Whatever the belief, held either by Indians or Britons, food became the cornerstone of power struggles, envy, acquiescence, and even sentimentality between Anglo-Indians and Indians.

In her memoir cookbook, Curries & Bugles, Jennifer Brennan speaks longingly of her youth growing up in the Punjab and Kashmir regions of India. Her book contains many references to club food, but also an interesting mix of hybrid dishes, no doubt concocted by Indian cooks to appease British appetites while executing Indian flavor profiles: for example, bacon and coriander pancakes, khichiri (an Anglo-Indian inspired dish of rice, lentils, onions, and a mild assortment of spices), chicken-stuffed apricots made with masala, and sardine curry puffs. Of course, her book also contains many recipes for regional curries and standard British fare. Her recollections are a far cry from Wyvern’s book, which encourages the upkeep of Britishness in the home and cuisine. He disparaged the use of “country parsley,” or coriander leaves, and the studding of cloves on the outer surface of ham (a method now widely used in the West), stating that any Indian cook found doing so should be fined one rupee for such an indiscretion.

Despite traditionalists such as Wyvern, curry soon filtered down to British lower-class homes, marketed as an economical dish that stretched cheap and meager ingredients into larger meals. Daniel Santiagoe, a cook from Ceylon (Sri Lanka), was tasked with writing a small cookbook, A Curry Cook’s Assistant, for English cooks. His curry recipes contain basic ingredients such as cumin, coriander powder, cloves, and chilies, but add items such as thick English-style gravies and milk to make them less fearsome for the Englishman to consume. He incorporated English potted and tinned meats, game, and bacon into curry, but his book still contained commentary critical of British abilities. First, it was not uncommon for Indian cooks to return to England or Scotland with their employers to teach others in the home kitchen how to prepare regional Indian cuisine. Second, Santiagoe’s experience in England left him unimpressed with British curries, which he found to be “good results, not in its original taste, but only second to it.” Prepackaged Anglo-Indian curry powders were, in Santiagoe’s opinion, incorrectly balanced, quite sour, and never as good as fresh spices toasted, ground, and added to the pot. He goes on to say that curries for the British palate should be on the milder side, but that a “hot curry is considered always nice and healthy.”

This is a far cry from what surfaced with the influx of immigrants to the United Kingdom during the 1950s-70s. Immigrant-owned restaurants became all the rage, with later generations looking forward to indulging in a hot curry and a cold beer. This in many ways resembled the boasts of masculinity displayed by British officers and officials of yesteryear: an admission that indulging in hot Indian curry was proof of masculinity, thereby nullifying old preconceived notions of the feminized Indian diet. Curry had taken hold so firmly that immigrant chefs were soon outnumbered by chefs of European ancestry specializing in the craft of Indian cuisine.

While it seems evident that an ebb and flow exists in the perception and preparation of curry, one cannot deny the simple fact that building an empire leaves no party unscathed. The moment one departed from the shores of England, cultural fragmentation occurred. Those spending upward of 20 to 30 years or more in India were never quite British again. Many returning home found it difficult to reassimilate and longed for the smells, experiences, and flavors of the subcontinent. Britain may have administered India, changed its topography, forged great advancements in terms of historical records and a railway system, and even subjected its people to poverty and hunger based on policy, but what India changed in Britain was its flavor. Food, a basic human need and the driving force of conquest, riches, and reward, was ultimately the final battle to which Britain, in its limitless contradictions, succumbed when it met, controlled, and ultimately fell in love with its long-held romanticized notions of India. After all, in the palace kitchen of Queen Victoria, Indian cooks made and served curries daily. Whether she partook of the feast is up for debate. However, many have considered her responsible for making the dish fashionable in England; a fashion that still rears its head in newspaper articles listing an Indian-inspired dish, Chicken Tikka Masala, as number one in the country.

i Anglo-Indian:  People of British ancestry living in the “subcontinent,” meaning the South Asian region or pre-partitioned India.

ii Sahib and Memsahib: titles given to individuals, usually of European ancestry, in positions of authority. In their most basic forms, they mean “sir” and “madam.”

Authored by:  Sabrina S. Baksh, MA

Photo credit: http://foter.com BY-NC-ND

The Economics of Sustainability: A Primer (Article 1)

This introductory article lays the foundation for a series of articles exploring the economics of sustainability. The purpose here is to foster discussion of what sustainability is and whether it matters. My focus is admittedly anthropocentric. So, let’s get on with it.

In the context of humanity, when we speak of sustainability, we not only speak of providing opportunity for a desirable quality of life for the current generation but also ensuring an equal or better quality of life for all future generations. Thus, integral to sustainability is replaceability. If resources are not replaceable, then their utilization is not sustainable. As resources become less and less replaceable they become more and more scarce. When we speak of scarcity, specifically economic scarcity, we focus on resources essential not only to our survival as a species but also to our quality of life. Such resources include usable land, potable water, clean air and time. When resources are scarce or can become scarce, we are compelled to make careful decisions about how to use them. As with all decisions, there are varying degrees of costs associated with choosing one or a set of alternatives over others. Here these costs are explained in terms of opportunity loss.

In most instances we are not fully apprised of the consequences of choosing one set of alternatives for using resources over others. Our decision making can be fraught with errors because the available information is incomplete or misinterpreted. In fact, we rarely, if ever, know with certainty whether a particular approach we use or activity in which we engage is, indeed, sustainable. Furthermore, simply because something appears to have been sustainable until now does not necessarily imply it will continue to be sustainable forever. Likewise, just because something has not been sustainable up to the present does not necessarily mean it cannot be made sustainable in the future. So, what are we to do? Are we to do nothing because we are uncertain of the consequences of our actions? No. Instead, we use the available information to make judgments about the likelihood that something or some action will or will not be sustainable. The less information we have, the greater the uncertainty, and the greater the likelihood of high opportunity loss. As we gather more information about one activity or another, we are able to update our beliefs and make stronger judgments about whether a particular way of using a resource is sustainable. When we ignore these steps, we open ourselves to incurring prohibitively high opportunity losses. The level of opportunity loss is equated with the degree of unsustainability. Stated another way, unsustainability is a function of opportunity loss.
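The belief-updating idea above can be made concrete with a toy calculation. The sketch below is my own illustration, not from the article: it treats “this practice is sustainable” as a hypothesis, updates its probability with Bayes’ rule as favorable observations arrive, and shows the expected opportunity loss falling as information accumulates. All probabilities and loss figures are hypothetical.

```python
# Toy illustration (hypothetical numbers): as evidence accumulates,
# uncertainty falls, and with it the expected opportunity loss of
# committing to a practice whose sustainability is in doubt.

def bayes_update(p, lik_if_sustainable, lik_if_not):
    """Posterior probability that the practice is sustainable,
    given one observation and its likelihood under each hypothesis."""
    num = p * lik_if_sustainable
    return num / (num + (1 - p) * lik_if_not)

LOSS_IF_UNSUSTAINABLE = 100.0  # hypothetical cost units

p = 0.5  # maximal uncertainty: no information at all
print(f"no data:     expected loss = {(1 - p) * LOSS_IF_UNSUSTAINABLE:.1f}")

# Each favorable observation (say, soil nutrients holding steady) is
# assumed 90% likely if the practice is sustainable, 30% likely if not.
for i in range(3):
    p = bayes_update(p, 0.9, 0.3)
    print(f"after obs {i + 1}: expected loss = {(1 - p) * LOSS_IF_UNSUSTAINABLE:.1f}")
```

With each observation the posterior rises (0.50, then 0.75, 0.90, 0.96) and the expected loss shrinks accordingly, mirroring the article’s point: more information means stronger judgments and a lower likelihood of high opportunity loss.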

Let us take arable land as an example. The availability of arable land, indeed of all land, is fixed. We cannot make any more. What we have, for all intents and purposes, is all there is. Sure, we can engage in land reclamation from the seas, as the Dutch have done via dykes and polders. But that process is not only very expensive; its results are very limited. We can hope that the coming generations will develop technologies that allow them to explore and terraform new planets. But decision making based on hope is born of desperation. Besides, how certain are we that future generations will have the ability and available resources to develop such technologies if we deplete those resources in the present? Again, when we make decisions based on little or no information, we end up incurring prohibitively high opportunity losses and are saddled with unsustainability.

Thus, given that the supply of arable land is fixed, how do we utilize it to improve or maintain our own quality of life while assuring the same for future generations? We can use it to raise crops, which require irrigation and soil replenishment via fertilization. Or we can use it to raise livestock, which require food (grass and other vegetable matter) and water. Or the land could be used to raise a combination of both crops and livestock. The question is which crops and which livestock. Do we settle for one type or a variety? Oftentimes that decision is shaped by external forces, such as climate. We cannot successfully raise pineapples in the wheat belts of Canada. Likewise, we would be hard pressed to harvest pine nuts in the Amazon. Nevertheless, we are able to raise a variety of crops within climatic constraints. That is, regardless of climate, we are rarely compelled to raise only one type of crop. In the temperate zones, for instance, we are able to raise and harvest a variety of fruits (apples, pears), vegetables (lettuce, cabbage) and grains (wheat, corn). We are able to do the same in other zones of the planet; it is the types of fruits, vegetables and grains that will differ. Likewise, strains of the same types of livestock have been bred to thrive under various climatic conditions. Thus, so long as we return nutrients to the soil and maintain the water supply, we can continue to produce output in a sustainable way.

Regardless of our best efforts to regenerate the soil's fecundity, there are external forces that can render our way of life unsustainable. One such exogenous factor is climatic change. Civilizations have fallen due to climatic changes. The Indus/Harappan civilization, for instance, was highly dependent on the Sarasvati River (located in what is now northern India and Pakistan) to support its agriculture. A gradual decline in rainfall eventually dried out the river, leading to agricultural collapse and mass migrations of people out of the region. Climate changes can transform a fixed resource into a variable one, and when this happens, a population is no longer able to sustain itself as those resources decline. When change is bad, it's time to pack up the family and move on out.

Policies imposed by external rule can also lead to unsustainability. Between 1876 and 1878, India suffered a large-scale famine in which up to 29 million people died. Although this was a period of intense drought, it was the British Raj's imposed policy of grain commodification and cash crop cultivation that turned a bad event into a disaster. The death toll could have been drastically reduced had the grains produced by the local population been consumed within India. Instead, a record 290,000 tons of wheat was exported to England. It was a decision that resulted in prohibitively high opportunity losses for the Indian people. However, with the land having to support fewer Indians after the famine, there was enough of a surplus of wheat production to keep the process sustainable, at least for the people of the British Isles. This example brings to the fore the question: sustainable for whom? Happy, well-fed Britishers were sustained by starving and dead Indians.

Other times, sustainability is affected by internal forces. Belief, for instance, can influence how resources are used. France faced famine between 1788 and 1789 as harsh winters (a little ice age of sorts) led to dwindling wheat harvests. This is important given that bread was the main source of nutrition for a majority of the country's population. The climatic change compelled farmers throughout most of Europe to abandon wheat and adopt the potato as the staple crop. The potato had been introduced to France about two centuries earlier. Nevertheless, the French still viewed potatoes with deep suspicion. Because the potato is a root, it was considered fit only for the lowest class of French society. Why? Medical and popular opinion accused subterranean crops of causing "phlegmatic" diseases. So strong were these beliefs that in some parts of France, regulations were enacted to ban the cultivation of potatoes, even for animal consumption. Eventually, the French came around and adopted the potato into their cuisine. I suppose there is nothing like a good bit of death and starvation to convince the French that pommes frites are good eats.

Historically, as people become more efficient at utilizing fixed resources, such as arable land, they begin to thrive, which translates into population growth. The population would eventually hit critical mass and become increasingly unsustainable. The likelihood of unsustainability grows more pronounced with climatic changes, other outside influences and internal beliefs. All of these factors usually result in catastrophic conditions, which eventually return the population to a sustainable path. Currently, however, population growth has been by and large Malthusian in its pace, and there has been no seriously catastrophic event or set of events to offset it. Science and technology have come to the rescue in staving off the critical mass problem. The question is: can this condition continue to hold? If so, for how long? I plan to examine and answer these questions in future articles. I plan to examine other questions as well. Such as, is there a particular social structure that is more sustainable than others? If so, what does it look like? And what steps can we as individuals take to lead sustainable lives? I also encourage you, the reader, to comment and ask questions. Let's have a fruitful discussion.

i The temptation here is to speak of regeneration. I prefer replaceability because not all resources can be regenerated. Minerals, for instance, are by and large fixed in supply. Yes, many can be regenerated after long periods of time. But this happens in geological time, not human time.

ii You may be familiar with a very similar concept called "opportunity cost." There is, however, a difference between opportunity cost and opportunity loss. Opportunity cost is measured in terms of gains and losses, while opportunity loss is a measure of regret. The greater the difference between what we thought would occur and what actually occurred, the greater our regret and the higher our opportunity loss.
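The regret measure in this footnote can be made concrete with a toy calculation. This is my own hypothetical illustration (the crop names and yields are invented, not the author's data): once the outcome is known, opportunity loss is the gap between the best payoff we could have had and the payoff of the choice we actually made.

```python
# A toy illustration (hypothetical numbers) of opportunity loss as regret:
# the best available payoff minus the payoff of the option actually chosen.

def opportunity_loss(payoffs, chosen):
    """Regret of a choice, given the payoffs that actually occurred."""
    return max(payoffs.values()) - payoffs[chosen]

# Hypothetical yields (tons per acre) once the season's weather is known.
outcomes = {"wheat": 3.0, "potatoes": 5.0, "fallow": 0.0}

print(opportunity_loss(outcomes, "potatoes"))  # → 0.0 (no regret)
print(opportunity_loss(outcomes, "wheat"))     # → 2.0 (regret of choosing wheat)
```

Choosing the best option in hindsight carries zero regret; the worse the choice turns out relative to the best alternative, the larger the opportunity loss, which is the sense in which unsustainability is a function of opportunity loss.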

Authored by Andre Baksh, PhD

What does it mean to be a historical epicurean?

It was not until my graduate studies that I realized my love of food and history would mesh so beautifully. Food is a basic need of all mankind, hence its movement from place to place via land and sea. Our ancestors' love of spices appeared boundless, as they bargained, usurped, enslaved and killed to infuse flavor into food while, most often, adding wealth to their pockets. Sugar moved millions across the globe, rearranging previously held beliefs about geographical spaces and the skin color of those who now reside in them. Salt, peppers, potatoes, tomatoes: all experienced the move from here to there. New cuisines emerged, causing our forefathers to identify regionally or even nationally, without realizing that they were the originators of "fusion" cooking. Let's not forget that with the travelling of food and flavor come the methods of preparation; ideas of fire; roasted, grilled, boiled, broiled, sautéed, stewed, souped, poached, kebabed, pressed, mashed, grated, peeled, infused, stuffed, and of course, barbecued. My, how our world has turned topsy-turvy over the gnawing needs of our bellies and the luxurious wants of our palates. This is where The Historical Epicurean will venture forth, as your guide to the history of food, popular concoctions, and the use of these commodities in non-food-related ways. This historical record will not solely focus on Atlantic trade nor dawdle on the Silk Road, but will (to the best of our scholarly ability) be an exhaustive look at worldwide movements of food. So, be prepared to learn in multiple ways and, most of all, enjoy peering into the past in order to understand what is presently on your plate.