Home page from AACH2014

Fought across the world, the First World War struck deepest at home. Few neighbourhoods, villages, towns or regions emerged untouched by the global conflict of 1914-18. This year’s Anglo-American conference takes as its theme the impact of the First World War on the locality and local institutions, on the family and social life, and on the memorialisation of war in the built environment and in private life.

Co-organised with the British Association for Local History, the Victoria County History and the American Association for State and Local History, the conference aims to be an international festival of local history seen through the lens of war. Our focus is not restricted to the UK, but will cover ‘home fronts’ across the world, including those of Britain’s empire, allies and other combatant nations. The conference is also keen to showcase current research projects relating to the First World War, the teaching of the history of the Great War, and the 1914-18 period in the media, visual arts and museum world then and now. Our plenary lecturers include Jay Winter (Yale), Bill Nasson (Stellenbosch University), John Horne (Trinity College Dublin) and Christine Hallett (Manchester).

On behalf of the Anglo-American conference 2014 Programme Committee:

Mark Connelly (Kent)
Santanu Das (KCL)
David Killingray (Goldsmiths College)
Miles Taylor (IHR)
Kate Tiller (BALH; Kellogg College, Oxford)

Register for AACH14 online

Programme day 1 – Thursday 3rd July
Programme day 2 – Friday 4th July

“The Great War at Home”: a Senate House Library Perspective

Illustration: F.S. Brereton, With our Russian Allies (1916)

This post was written for us by Karen Attar from Senate House Library Special Collections.

The conference “The Great War at Home” supplied an excellent opportunity for Senate House Library to provide a small complementary display. The only problem was how best to use a limited space. To the extent that we had a focus, that focus was publishing, and within the theme of publishing, Oxford University Press – especially timely in so far as a new History of Oxford University Press was published last year. In August 1914 seven members of the Modern History Faculty of the University of Oxford promptly set to and wrote Why We Are At War: Great Britain’s Case, in order to set forth the causes of war and the principles they believed to be at stake. This was the first of 87 OUP “pamphlets” about the War, although with 206 pages there was little of the pamphlet about it.

The Delegates of Oxford University Press approved the book’s publication on 16 October 1914, at their first meeting of the new academic year – by which time it was already in its third edition, the one displayed. The copy shown is from a collection of about 530 books and pamphlets pertaining to the War brought together by the pacifist historian Caroline Elizabeth Playne (1857-1948) in connection with the books she wrote about the conflict. The other OUP book shown is a homage to Shakespeare for the tercentenary of his death: evidence of the continuation, albeit in severely limited form, of academic publishing during the war.

Children’s adventure stories set against the backdrop of the Great War – stereotypically full of valiant English youths and cowardly, underhand Germans, some of them spies – give insight into how the war, in however unrealistic a form, pervaded children’s consciousness. An example of such literature was also displayed: With our Russian Allies by the extremely popular Frederick Sadleir Brereton.

All of these are examples of “The Great War in England”. We interpreted “home” more narrowly with Roll of War Service, 1914-1919, commemorating the losses in war of members of the University of London Officers Training Corps: seven officers and some 670 cadets.

In previous years Senate House Library’s contribution to the Anglo-American Conference of Historians has been purely to curate a display. This year the topic enabled the Library to give a conference paper, again seeing “home” as the host institution of the conference. Karen Attar, who had previously delved into the history of the Library during the Second World War, extended her researches back to the period 1914-1918 to talk about the University of London Library of that time. Documentary evidence is sparse compared with that for the Second World War, so an initial fear was that there would not be enough to say. There was no need to worry: a twenty-minute talk expanded to fill forty minutes. Several interesting points emerged in the course of preliminary reading, such as better air raid precautions during the First World War than the Second, and a suggestion that books would be safer on the central University’s premises in London than in Cambridge.

Local history and the First World War

Kate Tiller, founding fellow of Kellogg College, University of Oxford and chair of the British Association for Local History 

As the centenaries of 1914–18 finally come upon us, the challenges facing historians in researching and interpreting the impact of the First World War multiply. One is the need to investigate and understand the War more widely, recognising the importance of perspectives not previously considered significant: the Home Front; the wartime experiences of women and children; the economic, social, cultural and political consequences of the War; the Empire and dominion experience; and military events beyond the Western Front.

Another challenge is to revisit and scrutinise deep-rooted, existing assumptions about the War. David Reynolds, in a recent, cogent dissection (The Long Shadow: The Great War and the Twentieth Century, 2013), characterises the British view of the First World War as particular. Centred on the trenches, on military events and heavily influenced by literature and poetry, it perpetuates a verdict that was influential at the time of the 50th anniversary. This sees the War in hindsight as a futile sacrifice, a bitter and costly conflict, which failed to end all wars and led to another, more clearly justifiable, World War only 21 years later.

A third challenge is that posed by the growing demand for a popular and public history of 1914–18, a history to be shared between generations and places, disseminated by broadcasters, heritage professionals and teachers, in classrooms and on field trips. Amidst the growing hype, threatening at times to tip into unreflective cliché or even centenary ‘celebration’, local history has a special and important part to play. As the challenges point First World War studies away from single-perspective, one-narrative accounts, local history offers a way to respond. Returning to the local experience, and using and integrating the rich, direct contemporary evidence, enables the realities of wartime throughout British society to be rediscovered. We may unearth, preserve and record new evidence; generate fresh findings; pursue shared questions; encourage comparative thinking; and join up accounts of separate aspects of wartime and post-war experience within and between communities. In this way we can move on, as David Reynolds urges, to combine remembrance with greater historical understanding.

This is an ambitious agenda. Projects and publications are beginning to show how it can be fulfilled, and examples are reported here. More are promised, including events at Senate House and initiatives by the British Association for Local History (BALH), which aims to encourage and support the study of local history as an academic discipline and as a rewarding pursuit for grass-roots historians, individuals and groups. The two organisations combined on 28 February for a joint Institute of Commonwealth Studies/BALH day on ‘Experiences of World War One: strangers, differences and locality’. The keynote speaker, Dr Catriona Pennell, emphasised that, although a national narrative of the War’s history had dominated earlier study, fuller understanding depends on adding local and international perspectives and being aware of the constant interconnectedness of all three elements – local, national and international.

This theme was played out during discussion of the interaction of local people in Britain with black and Indian troops from the Empire and Dominions. A mixture of newspapers, diaries, letters, recollections, photos and official records provided the evidence. Wartime connections came through local camps and hospitals. Racial stereotyping, mixed marriages and outbreaks of violence all figured, but meetings of cultures were not just made by war, with some networks of family links operating before 1914 and after 1918. Nor were all ex-servicemen white UK residents, as demonstrated by several case studies of West Indian veterans returning to their homes in the Caribbean. There, November rituals of remembrance were kept at local war memorials, while island economies struggled, not least because of continuing debt burdens linked to their support for the mother country’s war effort. The local, national and international did indeed interact to form these experiences of the War.

Elsewhere, increasing publication of Home Front studies is bringing the non-military experience in the UK to the fore. From 1914, every kind of neighbourhood, village, town and region was touched, not only by the deaths and injuries of those going away to fight but also by the immediate demands and lasting changes felt by those who were ‘left behind’, and who, willingly or unwillingly, were directly affected by the war. The whole economy was mobilised, while massive volunteer effort was forthcoming. Local histories of this experience are showing the illuminating balance to be struck between detail and generalisation, and the potential both for comparison and for understanding the particular. The latest Victoria County History Essex volume (XI, on Clacton, Walton and Frinton: North-East Essex Seaside Resorts, 2012) brings home, in its chapter on the War, the threat of invasion, air raids and the black-out, and the loss of holiday business that made for a very specific East Coast, seaside experience of 1914–18.

Another recent publication (Local Aspects of the Great War: Coventry and Warwickshire 1914–1919, 2012) reflects a more general range of Home Front research topics in ten related studies. The canvas chosen is one county (for these purposes Coventry and Warwickshire, but not Birmingham). As the editor, local historian Chris Holland, argues, this scale of study allows a balance between detail and generalisation and the possibility of challenging commonly held views. It is an aim impressively achieved through examinations of an area including large and small towns, major industries, artisan and labouring families and rural, agricultural communities. The topics covered represent an agenda that will be useful to others looking to undertake local studies spanning the war years. The themes are the outbreak of war, Belgian refugees, recruitment, billeting, caring for the wounded, wartime industrial production, food, local tribunals for exemptions from military service, the ‘Spanish’ flu epidemic of 1918–19 and responses to the Armistice.

These are discussed with a telling and humane attention to the stories of individuals and families, while reminding the reader of how these experiences were a direct part of wider determinants and trends, from DORA (the Defence of the Realm Acts), to the formation of the Women’s Land Army, to the rise in the cost of living by nearly 50 per cent between 1914 and 1916, to the addition of 3 million acres of land under cultivation. Alongside this are some equally striking local facts and figures. Kenilworth found land for, and established, 104 new allotments in one month. In Coventry, White and Poppe, a light engineering company employing 350 people in 1914, rapidly became one of the largest munitions factories in the country, employing 30,000 by the end of the conflict. Its workers, including many women, filled 30 million fuses and 31 million detonators, while the firm also produced War Office vehicle engines. The whole operation included housing and hostels, canteens, allotments, a swimming pool, library and cinema. By 1917, VADs (Voluntary Aid Detachments) were running 17 hospitals in Warwickshire, with the one in rural Kineton growing to provide 82 beds.

The work of the eight contributors highlights many realities, including the degree of pre-war preparation carried out by military and civil organisations, and the enormous practical complexity of coping with war conditions, from transport, to telegrams and post, to civilian medical services with large numbers of doctors and nurses on war service, to labour in shops, factories and fields. The role of women is observed, revealing some resistance to their growing employment, along with the degree of class tension, from a strike at White and Poppe to the apparently seamless assumption of local leadership by traditional elites. A legion of committees and activities was organised, with an outpouring of voluntary effort aimed at ‘doing our bit’. How this was turned to effective action, and how far it was controlled locally or subsumed in centrally directed government initiatives, is another recurrent theme.

Local studies also allow us to look afresh at the familiar. The main war memorial at Colchester, unveiled in 1923, is one of tens of thousands of local memorials in the British Isles. They are telling subjects for local research into the relationship between remembrance and community, as each place made its own decisions on how to commemorate its dead. Most war memorials took the form of permanent monuments, sometimes collective, sometimes to groups or individuals. Some favoured practical projects and buildings looking to the better future secured by the sacrifice of the dead. Although the creation of fitting tributes was a near universal response, the memorials themselves are far from uniform. Many record the names of individual combatants (presented in a significant variety of ways), but they also reflect the circumstances, attitudes, funds, tastes and sometimes disagreements of families and comrades, of influential local individuals and institutions, and of others in the wider circles of connection and remembrance which influenced the making of each structure.

The main First World War memorial in Colchester is just one of some forty in the town, a vivid reflection of the many community activities – school, work, church, sport, voluntary organisation – the dead of 1914–18 might have been part of. The collective and apparently democratic nature of the process of making Colchester’s main memorial is reflected in the 40 different groups, from the Scouts, to ‘Married Women’, to religious denominations, political parties, Freemasons, friendly societies, secondary schools, local employers and utility companies represented on the War Memorial Selection Committee. Formed as early as January 1919, it energetically debated six alternative forms of commemoration – public baths, school of art, purchasing Colchester castle, a memorial hall, a hospital wing and a monument. It was the last which won out, and the committee minutes detail the deliberations, fundraising, the purchase and gift of the site, choice and commissioning of the memorial with its statues of Victory, Peace and St George, the composition of the wording (referring to both the military dead and the other men and women ‘who stood for King and country & bearing arms or by their work helped to win the war’), and finally the elaborate unveiling ceremony.

The memorial became the focus of regular remembrance, those public rituals intended to ensure that the dead and what they died for remain in local consciousness. This too is rich ground for research. In November 1938 the mayor of Colchester, speaking at the war memorial, ‘invited his listeners to ask themselves whether or not the concept of remembrance had become meaningless and sentimental, and whether the sacrifices of the Fallen had been in vain’. Plaques have now been added to the monument to commemorate the dead of the (in a curiously understated phrase) ‘further war’ of 1939–45, and – in this army town – to soldiers killed since 1945 while on service or through terrorist acts.

Through its publications, BALH hopes to develop ideas and methods for local studies of wartime experience. These include a guide to researching local memorials and their significance (Remembrance and Community: War Memorials and Local History by Kate Tiller, 2013). Its quarterly newsletter Local History News is carrying a series of short articles on different themes, which can be read at www.balh.co.uk. The articles in the series, published and forthcoming, are:

  • Memorials of war (Gill Draper), LHN 103, Spring 2012
  • Community responses to the outbreak of war, August 1914 (Catriona Pennell), LHN 104, Summer 2012
  • The agricultural community at war, 1914–1918 (Bonnie White), LHN 105, Autumn 2012
  • Soldiers’ letters and the First World War (Rachel Duffett), LHN 106, Winter 2013
  • Women and work in the First World War (Deborah Thom), LHN 107, Spring 2013
  • Schools in the First World War (Tim Lomas), LHN 108, Summer 2013
  • The railwaymen who went to war: stories held at the National Railway Museum (Alison Kay), LHN 109, Autumn 2013
  • Service and sources: compiling local narratives of WW1 military history (Richard S. Grayson), LHN 110, Winter 2014
  • War resisters in Britain during the First World War: an opportunity for new research (Cyril Pearce), LHN 111, Spring 2014 (forthcoming)
  • Impact of the War on country estates (Allen Warren), LHN 112, Summer 2014 (forthcoming)
  • Impact of the War on London’s minorities (Jerry White), Autumn 2014 (forthcoming)
  • Local responses to food shortages (Karen Hunt), Winter 2015 (forthcoming)
  • Children’s experience of the FWW (Rosie Kennedy), Spring 2015 (forthcoming)


A flagship event will be this year’s Anglo-American Conference of Historians, ‘The Great War at home’, to be held at the IHR on 3–4 July. It will be jointly presented by the IHR, BALH, the American Association for State and Local History (AASLH) and the VCH (see anglo-american.history.ac.uk/). The theme is the impact of the War on the locality and local institutions, the family and social life, and the memorialisation of war in the built environment and in private life. It aims to gather together local and community historians, academics and graduate students to present and exchange their findings and ideas on all aspects of the impact of the War, in the UK and worldwide.

The conference will reflect the momentum and direction of work already underway. It will also point ahead: a joint session, bringing together local historians from BALH, the Family and Community Historical Research Society and the AASLH, will explore shared interests and the possibility of an online network of local groups researching themes in Home Front studies. This will be another step towards realising the potential of local studies to respond to the challenges faced by historians of the First World War and its impact.

Food in History – Anglo-American conference 2013 Day 2 podcasts

Day two of the 82nd Anglo-American Conference of Historians continued the wide-ranging discussion of food throughout history. From the second day we recorded two plenary talks and a lunchtime policy forum. These are now available as podcasts on History SPOT.

Policy Forum: The politics of food: past, present and future
Chair: Frank Trentmann (Birkbeck/Institute of Sustainable Consumption, University of Manchester)
David Barling (Centre for Food Policy, City University)
Annabel Allott (Soil Association)
Keir Waddington (University of Cardiff)
Craig Sams (Green & Blacks)
Susanne Friedberg (Dartmouth College): Moral economies and the cold chain
Cormac O’Grada (University College Dublin): Famine is not the problem: an historical perspective

All podcasts from the plenary sessions of the Anglo-American conference are available on History SPOT under the Anglo-American Food in History section.

Food in History – Anglo-American conference 2013 Day 1 podcasts

Day one of the 82nd Anglo-American Conference of Historians is now over and has already produced a great deal of debate and discussion. The topic this year is food in history, and we have two plenary sessions for you as podcasts. These are fascinating talks by two scholars uniquely qualified to speak on the subject.

First up was Ken Albala (University of the Pacific). His talk proposed a unified theory of culinary evolution for the past 2,500 years. At the outset he noted that he would attempt to explain why there appears to be a recurring oscillation between two fundamentally opposed aesthetics of food: periods focused on elite cooking versus periods focused on simple rustic fare.

The second plenary produced today as a podcast was by Steven Shapin (Harvard). Shapin discussed the saying ‘you are what you eat’ and how the understanding of what it means has not only persisted through time but has also radically changed. As someone in the audience observed at the end, today it is often not what you eat that shapes who you are, but what you don’t eat – something that was never the case in the past.

To listen to these podcasts, click on the links below:

Ken Albala (University of the Pacific), Toward a historical dialectic of culinary styles

Steven Shapin (Harvard), You Are What You Eat: Historical Changes in Ideas about Food and Identity

‘The most important thing in the world’: food and modern conflict

Bryce Evans, Lecturer in History, Liverpool Hope University

Food and war

‘When I was a small boy at school’, writes George Orwell, ‘a lecturer used to come in once a term and deliver talks on famous battles’. ‘He was fond of quoting Napoleon’s maxim “an army marches on its stomach” and at the end of his lecture he would suddenly turn to us and demand, “What’s the most important thing in the world?” We were expected to shout back “Food!”’

As several speakers at this year’s conference have demonstrated in their published work, food and conflict are closely linked. One of the simplest definitions of economics – the allocation of scarce resources among competing ends – suggests why.

This is not to reduce the causes of war to rivalry over resources. Violent conflicts occur for many other reasons. But the link is an enduring one. Today, volatile oil prices have pushed developed countries towards biofuels (fuel made from food), further intensifying the historical relationship between food and war.

Food as a weapon of war

The use of food as a strategic weapon is nothing new. Texts as ancient as the Chinese Art of War and the Roman De Re Militari advocate denying the enemy food. The current conflict in Sudan provides a case in point of the cynical application of age-old wisdom. There, the government intensifies bombing in rebel areas at harvest time, destroying food. In turn, the country’s rebels seize humanitarian food supplies intended for refugees.

Of course, food is integral to grand strategy and imperial expansion. Food deprivation underlay Napoleon’s ‘Continental System’ trade blockade against the United Kingdom. Similarly, demand for sugar in eighteenth and nineteenth century Europe was the sweetener for imperial expansion; thus, the most powerful expression of liberty by Haiti’s slave population was the mass burning of the sugar plantations during that country’s late eighteenth century revolt.

As the nineteenth century progressed, creole elites in Latin America pursued war against their former colonial masters and in the process cemented their control over the continent’s sugar and cacao plantations. With urbanisation and industrialisation, British demand for ‘a nice cup of tea’ played its part in episodes of nineteenth century imperialism such as the ‘Opium Wars’.

Industrial food processing soon accompanied industrial warfare. During the American Civil War, the Confederacy grew relatively thin while Northern canning operations (most famously Gail Borden’s condensed milk plants) kept Ulysses S. Grant’s army strong. A generation later, control over sugar plantations would again factor in America’s wars with Spain.

Forcing hunger on the enemy, still used as a weapon of political control by dictators in the developing world today, motivated such diverse acts of war as the British campaigns against the Boer republics; the blockade of Germany during the First World War; Stalin’s ‘terror famine’ in the Ukraine in 1930–33; and the Nazis’ ‘Hunger Plan’ in the Soviet territories during the Second World War.

With a brutal logic, grand strategy often trumps humanitarian concern when it comes to food supply. Ultimately, Winston Churchill could stomach the starvation of three million Bengalis in 1943 because famine was counterbalanced by strategic advantage. Meanwhile, in today’s post-Cold War context, the debate over whether American food aid is an imperialist weapon or a philanthropic vehicle rages on.

Progress amid destruction

One great result of the French Revolution was the removal of Parisian chefs’ aristocratic patrons, prompting them to set up their own restaurants and establishing the culinary primacy of the Parisian bistro. One of these chefs, Nicolas Appert, began experimenting with ways to preserve foodstuffs. Once again, war drove innovation. In 1795 the French military offered a lucrative cash prize for a new method to preserve food. Appert developed a method for sterilisation of food by heat. Vegetables and other foodstuffs could now be preserved in cans so that they tasted almost fresh.

T.E. Lawrence, who survived on little more than bread dough moistened with butter during the Arab revolt of the First World War, understood the importance of food in conflict better than most. The invention of canning, according to Lawrence, was more important in the history of warfare than the machine gun.

Adhering to the dictum attributed to his great rival, the Duke of Wellington always ensured his troops were well provisioned. He also appreciated the role of booze as a fillip to morale. Alcohol consumption has been cited as part of the European divide and even as a factor in the outcome of the continent’s most decisive battle. ‘It was wine and beer that clashed at Waterloo’ writes a French historian. ‘The red fury of wine repeatedly washed in vain against the immovable wall of the sons of beer’. In fact, beer consumption in Britain fell steadily between 1800 and 1850 and, on the morning of Waterloo, Wellington had fortified British soldiers with rum, not beer, while his Prussian colleague, Blücher, prepared for the battle by bathing in brandy and eating garlic-flavoured rhubarb washed down with schnapps.

A satisfactory combat diet, as both Napoleon and Wellington appreciated, is essential to the upkeep of morale when surrounded by death. This helps to explain a disturbing tale from the Crimean War. Aboard a Royal Navy ship in Balchik Bay, Fanny Duberley wrote of the agonising death of an officer from cholera. ‘During his death struggle the party dined in the saloon, separated from the ghastly wrangle by a screen … with few exceptions, the dinner was a silent one; but presently the champagne corks flew.’

Throughout modern history, the social and economic pressures of war have driven innovation in foodstuffs. Shortly after the end of the American Civil War, for instance, the French emperor Napoleon III announced a competition to develop a healthy, preservable new fat for consumption by the expanding army and navy. Margarine was born: made from rendered beef fat emulsified with water.

In the twentieth century the symbiosis between nutritional progress and martial destruction continued. In 1909 German chemist Fritz Haber synthesised ammonia, an innovation which greatly improved soil fertility and therefore food yields. But Haber would also apply his scientific talents to developing chemical weapons during the First World War, prompting his wife to shoot herself dead in protest.

As George Orwell later noted, ‘the Great War could not have happened without tinned food’. Nor can that war’s successor, the quintessential total war, be understood without reference to the advances in food preservation that sustained millions, on the one hand, and the twenty million deaths through the effects of starvation, on the other.

Absence versus abundance

It takes several hours to arrive and when it does it’s cold and soggy, but Kentucky Fried Chicken is currently in demand among some residents of the Gaza Strip. The walls and fences around the Strip, part of the Israeli blockade, are circumvented by the many black-market tunnels running between Gaza and Egypt. These tunnels not only provide a channel for nutritious food to enter the territory but also satisfy some people’s longing for globalised junk food otherwise unavailable in Gaza.

Absence and abundance of food, whether material or rhetorical, typify wartime situations.

Enduring an absence of desirable food, as in Gaza, drives not only smuggling but also aggression towards one’s foe. During China’s Civil War, Mao’s troops won much support through abstaining from the plunder of food and helping peasants with the rice harvest. Similarly, depictions of the enemy’s gluttonous abundance are a wartime propaganda staple. The journalist Robert Fisk recalls outraged Iranian soldiers showing him fridges full of beer and food in trenches captured from the Iraqis during the Persian Gulf War of the 1980s.

Likewise, the history of conflict is replete with propaganda depicting enemy life as one of culinary deprivation. White settlers in the American West pointed to the ‘backwardness’ of Native Americans’ subsistence-level economies, which exposed them regularly to the threat of starvation. As witnessed in the famous Kitchen Debate between Nixon and Khrushchev, during the Cold War Americans marketed civilisation as food abundance and choice. Hence Ronald Reagan’s taunt to Fidel Castro: ‘Peanuts, popcorn, cracker jacks!’

It is that very absence of food during prolonged conflicts – exacerbated by wartime price inflation – which forces soldiers and civilians alike to consume substitute foodstuffs, providing some fascinating studies in human behaviour. Tolstoy’s War and Peace has hungry Russian soldiers happily consuming a sweet root which caused ‘swelling of the arms, legs and face’. Giuseppe Garibaldi’s grandson, fighting in the Greco-Turkish war in 1897, recalled ravenously hungry Red Shirt officers’ faces turning from happiness to horror after swallowing eggs whole: the eggs contained half-formed chicks.

And then there is cannibalism. Survivors of the siege of Leningrad, 1941–44, described being able to easily spot a cannibal. Among the emaciated city dwellers, eking out an existence on rat meat or whatever else they could scavenge, there walked well-fed and healthy-looking people with ‘tender pink faces’ and ‘splendid bright blue eyes’.

Climate and harvest: lessons for today

The historic link between climate change and conflict is disputed and varies with different crops. Nonetheless, the fact remains that most of the world’s population rely on small-scale agriculture, which remains as vulnerable as ever to climatic shocks and fluctuations. Furthermore, the expansion of biofuels has resulted in higher food prices and, consequently, the heightened prospect of conflict, whether civil or state-to-state.

In 2008 an archetypal historical problem returned. The food riot – with its attendant notions of moral economy – was witnessed across the globe. In that year rioting in 48 countries was a consequence of bad harvests and a rise in the price of staple food. In Haiti, with angry crowds demanding cheaper rice, the country’s prime minister was forced to resign. Similarly, with food price volatility linked to the state’s plan to lease arable land to foreign concerns, the President of Madagascar was forced aside. In 2012, droughts again ensured a spike in world cereal prices and exacerbated ongoing conflicts in central Africa and Syria. Right now, in places like Mali, the Democratic Republic of Congo, and Syria, refugee crises are compounding food crises. Hence, there are heart-rending stories of Syrian refugee children eating weeds in order to survive.

Despite improvements in scientific and humanitarian efforts to address the problem, it seems that the conflict/hunger nexus remains wearyingly well established. As we ponder the role of resource competition in stoking future conflicts, this provides unsettling food for thought.

Further Reading:

Frank Trentmann and Flemming Just eds., Food and Conflict in Europe in the Age of the Two World Wars (Palgrave Macmillan, 2006)

Ina Zweiniger-Bargielowska, Rachel Duffett & Alain Drouard eds., Food and War in Twentieth Century Europe (Ashgate, 2011)

Lizzie Collingham, The Taste of War: World War Two and the Battle for Food (Allen Lane, 2011)

Rachel Duffett, The Stomach for Fighting: Food and the Soldiers of The Great War (Manchester University Press, 2012)

Food and the nineteenth-century British stomach

Ian Miller, Irish Research Council Postdoctoral Fellow, Centre for the History of Medicine in Ireland, University College Dublin

Stomachs have historical importance. The Victorians were obsessed with indigestion, the workings of the digestive system and the diverse food cultures that emerged as Britain industrialised and colonised. Accordingly, they produced a wealth of popular and medical literature on stomachs, digestion and diet. Perhaps the most memorable example was an 1853 health tract entitled Memoirs of a Stomach, penned by an obscure author named Sydney Whiting. This short book proved immensely popular: it ran into various editions and was even translated into French in 1888.

On the surface, Memoirs of a Stomach might appear to be an unusual choice of reading material for such an extensive audience, given that the main protagonist is a remarkably literate stomach, named Mr Stomach, who recounts the misery of his long life to the reader in painstaking detail. The organ begins by describing how his ancestry dates back to the invasion of the Saxons, when the great Sir Hugh Stomach was created baron due to the huge quantities of beef that he was able to digest. Sadly, it is explained that Mr Stomach’s mother died soon after giving birth to him, ‘joining the stomachs of another sphere’. The consequence was the commencement of a life of poor health, prompted by the organ’s owner being breast-fed by a London woman whose milk was contaminated by her over-indulgence in liquor and porter.

In his youth, so Mr Stomach complains, he was forced to digest adulterated flour, sweetmeats, oysters and tobacco smoke; foodstuffs not well suited to his delicate constitution. At college, the organ’s ‘master’ consumed long breakfasts that lasted until noon, during which masses of food from around the world would be poured into his cavity. It is at this point that dyspepsia (or indigestion) struck for the first time. Recovery ensued. However, shortly afterwards, the organ’s hapless owner fell in love. Mr Stomach bitterly recalled his master’s new-found habit of singing loudly, lamenting that he was ‘constantly being woke up in the night, and found myself either walked up and down the room, the maniac repeating love ditties’. The honeymoon proved to be an even more traumatic experience for the unfortunate stomach, as the continental foods consumed by his ‘master’ played havoc with his health. Eventually, his ‘master’ secured employment in a well-paid city job that provided him with the financial resources to indulge excessively in alcohol, much to Mr Stomach’s disgust.

The nervous stomach

Memoirs of a Stomach illustrates the organ’s pivotal positioning in Victorian constructions of the unhealthy body. During the late eighteenth century, new conceptualisations of the workings of the inner body had invested the stomach with enhanced significance. A growing acceptance of the bodily importance of the nervous system precipitated shifts in understandings of the interrelationship between different body organs. Edinburgh physician Robert Whytt was a central figure in this process. Whytt had developed the concept of ‘nervous sympathy’ to describe how different bodily parts interacted with one another. He depicted a bodily system rich with links between organs, all connected by the nervous system. He then applied the concept of ‘nervous sympathy’ to explain how pain or discomfort might be felt in organs far away from the initial seat of disease.

In these models, the stomach was prioritised as a key site of nervous energy. During the nineteenth century, the stomach was described as ‘a focus of vitality – the centre of a department in which the living principle is most abundant and exquisite’, the ‘foundation or root of the complex apparatus’, the ‘great nervous centre or sensorium of organic life’ and even as the ‘great abdominal brain’. Contemporaneously, biologists speculated that digestive processes were a central facet of all organic life. One popular theory suggested that, as the lowest forms of life had no sense, pulse or motion, they were essentially animated stomachs. Digestion seemed to be their sole faculty. The sponge provided an illustrative example: a creature that appeared to consist almost entirely of minute pores, described by one author as ‘many little mouths, which perpetually suck in the sea-water, and the animalcules with which it abounds’. The sponge was one of the most primitive forms of life, and it appeared to do little else with its life but digest. It was, essentially, a swimming stomach. This encouraged conclusions that the stomach was the most basic, and therefore the most important, of all human organs.

The most extreme proponent of such views was John Abernethy, a highly influential anatomy teacher and surgeon. Abernethy campaigned tirelessly for wider recognition of the bodily significance of the stomach, its illnesses and the distressing consequences of gastric sympathy. His influence was such that later in the century The Observer claimed that Abernethy’s development of the concept that all bodily disease was traceable to gastric derangement was one of the greatest services ever rendered to mankind.

Although few physicians were quite as obsessed with the stomach as Abernethy, a broad consensus existed on the organ’s physiological significance. In the 1850s, one Exeter physician, Dr Lamb, claimed to be able to relieve the symptoms of tuberculosis via the correction of digestive faults. James Johnson, famed editor of the Medico-Chirurgical Review, suggested that irregularity of the heart was a common consequence of stomach disorder. Sensory dysfunction was also regularly attributed to gastric sympathy, with dimness or cloudiness of sight regularly perceived as a direct result of a disordered stomach. Hypochondria was seen as a particularly critical outcome of the nervous sympathy between stomach and brain, leading patients to suffer from morbid mental diseases, to be tormented by imaginary pains or diseases, and to grow weary or apprehensive of life.

The stomach and British society

Worryingly, stomach problems seemed to be rife in nineteenth-century Britain. In virtually all nineteenth-century literature on digestion, gastric complaints were framed as problematic not only for the unfortunate individual crippled by abdominal pains, but also for British national health and progress. Advice given on digestion rarely dissociated these themes. In 1826, the Medico-Chirurgical Review stated that ‘there is no complaint more common in this country than an imperfect condition of the stomach’. Twelve years later, the Dublin Journal of Medical Science stressed that ‘stomach diseases are of every day occurrence; they form the national malady of Britain, and consequently the prime staple of the medical art’. Throughout the early 1850s, advertisements for Jones’ Tremadoes Pills announced that indigestion was the ‘prevailing evil of the human frame, and the fashionable disease of the age’. As late as 1886, adverts for Seigel’s Syrup declared that ‘the national disease of this country is indigestion’. The so-called ‘demons of dyspepsia’ were also immortalised in etchings by George Cruikshank. In the nineteenth century, gastric illness was presented as far more than just a physical concern for the individual to conquer. It seemed to pose a communal threat.

Concepts such as these also permeated non-medical literature. An article published in Blackwood’s Edinburgh Magazine in 1861 dramatised the predicament to such an extent that it claimed not only that England was the country most liable to gastric conditions, but also that, whilst labouring under such attacks, ‘nothing but family considerations prevented him [the Englishman] from blowing out his brains with a pistol, or effectually ridding himself of his woes by plunging into the muddy torrent of the Thames’. The author went so far as to speculate that only a fraction of the dyspeptic British had the courage to abstain from self-destruction during the gloomy months of November and December, a period when multitudes of corpses of sufferers from crippling gastric diseases would supposedly be swept along the nation’s rivers.

What had caused this unfortunate situation? What factors had rendered the British so susceptible to chronic gastric problems? Some medical authors blamed rising levels of nervousness on Britain having advanced, modernised and become civilised. For some, the nervous structure of man’s natural body had failed to adapt in line with the quickening pace of modern life. A national mass of stomachs and nervous systems had yet to adjust to the requirements imposed by modernity. The modern urbanised British individual needed to somehow establish a way of maintaining the health of his/her body in accordance with nature’s requirements, even if he or she happened to be situated in a scenario where the body’s nervous structure had not yet adapted.

Contemporary literature on digestion explicitly envisioned the urban environment as an artificial obstacle which the natural stomach had to negotiate in order to ensure full bodily health. Rural life was romantically constructed as a healthy norm while urban living was depicted as its antithesis: an unnatural departure that threatened to incur numerous health risks for its residents. In particular, the city was seen as bringing individuals face-to-face with a vast array of predisposing causes with the potential to trigger an irreversible decline in gastric health; a process likely to commence with illness of the stomach and which threatened to spread throughout the system via the complex entity of nervous sympathy.

In 1830, The Times noted that dyspepsia was highly prevalent in the factory districts and that, via nervous sympathy, indigestion had resulted in the loss of teeth among many young women residing in these areas. The business classes, too, were seen as liable to dyspeptic attacks. These professionals were viewed as entangled in the constant whirl and rush of business, catching a mid-day meal only if they had time and being in the habit of drinking stimulants in order to set them up for work. These habits were especially notable at times when they were most harassed and worried about business. The physical nature of office work was also postulated as a likely cause of dyspepsia. Those employed in offices were considered at risk of damaging their stomachs due to the stooping position in which they wrote, a posture believed to mechanically interfere with the stomach’s various actions.

Corsets, too, were blamed. Accounts of female deaths noted that the corset’s tight grip around the stomach, combined with a crowded room, mental emotions and a dance, had caused ladies to faint and even, in some instances, die. Gastric problems appeared to have infiltrated British society to such an extent that they now posed a threat to the health of all sections of society, from the working-class factory girl to those fortunate enough to be living in a state of luxury. The threat of chronic dyspepsia now seemed to be everywhere.

Eating the right food

As Britain’s advance towards urbanity and civilisation appeared unlikely to halt or reverse, a plausible approach to resolving the problem of communal gastric illness was to educate the public on how best to navigate modern conditions by paying close attention to food intake and digestion. Dispensing dietary education came to be seen as a paramount activity. The main concerns of literature on digestion were the quantities of food that stomachs could naturally digest; the appropriate timing and distribution of meals; and moderating the varieties of food consumed. According to Abernethy, modern man was eating and drinking far more than was necessary for his natural wants. He was filling his stomach and bowels with putrefying food, the elements of which would then be dispensed throughout the body via the nervous system. Accordingly, Abernethy advised moderation in food intake.

Appropriate meal distribution was presented as equally important. For many critical observers, urban life had disrupted man’s natural eating schedule. The removal of men from the countryside to the town seemed to have encouraged a postponement of dinner until five o’clock, if not later, in urban areas, a practice deemed to be ruining the health of thousands of city dwellers. One exception to this rule was Manchester, a city whose communal practice of dining at one was identified as being particularly beneficial for urban health, prompting claims that it should be introduced in every British town and city. This encouraged one anonymous Mancunian to assert that:

A Manchester man is never drowsy after dinner; he does not sink to the level of a boa constrictor, and indulge in a cozy, sulky snooze after eating; his motto is semper vigilans – wide awake; he knows nothing of dream-land; he cares nothing about fairy visions. He positively jumps up after eating a pound of beef-steak, and goes to his ledger as if nothing had happened. The Manchester stomach is sui generis; it is no more embarrassed by feeding than a steam boiler. O dura mercatorum ilia!

The increased variety of food available in Britain also caused concern. Modern life had encouraged global communication; one important consequence of this being an increase in the number of foods readily available for consumption. A belief existed that natural food supplies had been placed around the world that were best suited to those who lived there. New transport possibilities including canals, railways and boats were feared to have brought a highly refined and varied cuisine to all sections of British society, which the stomach was expected to digest. Dietary articles imported from entirely opposite climates were now being consumed in one single meal. The availability of such a complex diet appeared particularly troubling as it encouraged a radical departure from the needs of the average British constitution. As a warning, various physicians stressed that while most animals ate monotonously, urban man persistently defied the laws of nature by scarcely eating two similar meals in one day alone.

What, then, was considered the most appropriate natural diet for the British stomach? The esteemed physician Thomas Lauder Brunton made much of prehistoric strawberries: seeds of the fruit had been found in a body believed to date from prehistoric Britain, and in his view it must therefore be a natural food. Generally speaking, however, authors providing advice on digestive health were far more eager to say what substances were not good for the stomach than what were. Broadly speaking, it was condiments such as salt, vinegar, sauces and spices that were deemed most harmful to the British stomach, alongside stimulants including tea, coffee and alcohol. The lower animals were observed never to consume these. Modern human consumption seemed to complicate the natural work intended of the digestive organs, as it represented a radical departure from the physiological needs of the animal world.

Stimulants including tea and coffee were also routinely targeted. Their introduction was seen to have produced an important change in the customs of European nations which, according to some, in fact constituted a profound revolution in dietetics. Tellingly, the widespread habit of tea drinking was seen as especially problematic in regions of Ireland, and attracted considerable comment in the late nineteenth century when a tea epidemic apparently ravaged parts of the west of Ireland. Alcohol was also frequently denounced in literature on stomachs and digestion. Some anti-alcohol advocates attempted to impress the negative effects of intemperance on the audience attending their meetings by bringing along preserved, dissected stomachs. One prominent temperance advocate, Dr Sirder, to the apparent amusement of the audience, inflated one of these stomachs with his breath whilst speaking at a public meeting, and then asked his onlookers if they thought it was possible to put a pound of rump steaks and four pots of beer into such a small cupboard as that.

Evidently, nineteenth-century Britain witnessed an obsession with the stomach and digestion that produced a pervasive moralistic discourse on food and diet. Medical authors were particularly intrigued by questions of how man’s natural body was to adjust to modernisation, industrialisation and civilising processes. Maintaining a natural diet was upheld as a solution to this problem.

See also: I. Miller, A Modern History of the Stomach: Gastric Illness, Medicine and British Society, 1800-1950 (London: Pickering and Chatto, 2011).

Food in history: ingredients in search of a recipe?

Sara Pennell, senior lecturer in early modern British history, University of Roehampton

In 1660, the cook Robert May published recipes for one of the modish dishes of the day: the grand ‘sallet’ or salad. The Stuart ‘sallet’ was a spectacle, with its carefully arrayed mixture of fresh and preserved elements, and imported commodities (anchovies, ‘Virginia Potato’, almonds) alongside indigenous ingredients (mushrooms, samphire). Sallets – the first dish to be given exclusive focus in any English-language food text, in John Evelyn’s 1699 Acetaria – were dietetically fashioned to soothe and stimulate in equal measure.[1]

The current state of historical scholarship about, and of, food is arguably like a grand sallet. There is much that stimulates, yet also much familiar to the palate. The array of research ‘ingredients’ is diverse, and, on occasion, exotic, and yet the sense in which these ingredients come together to make a coherent ‘dish’, or area of shared theorisation and methodological harmony, might take some chewing over.

Joachim Beuckelaer (1533-73) seems to have been the first painter to depict fish stalls. Joachim Beuckelaer, The Fish Market, 1568, Musée des beaux-arts de Strasbourg (public domain via Wikimedia Commons).

The first challenge is to establish what food history/food in history comprises. These word order and prepositional changes are significant, signalling shifts in scope and scale. Food history has (perhaps unfairly) a reputation for being exactly that: explorations of foodstuffs – their cultivation, preparation and consumption – in historical perspective. Limited though that may sound, such focus has produced everything from single-ingredient/commodity histories with global intent (from Mark Kurlansky’s 1997 ‘biography’ of Cod to the global histories of potatoes, cake and whiskey in the Reaktion Books ‘Edible’ series),[2] to evocative accounts of feasting and many, many modern editions of historic ‘cookbooks’ (which, more properly, are recipe collections).

What is lacking in some food history is attention to the historical agency of food, which is too often relegated to a table-dressing role. Paraphrasing the title of B. W. Higman’s 2011 book, doesn’t food make history happen?[3] This issue of scale of approach in studying food historically – from the panoramic sweep of Felipe Fernández-Armesto’s 2001 Food: a History to the micro-historical, with pretensions to macro-historical, significance (can a cookbook really ‘change the world’?) – is a vexed one.[4] Too small, and the tendency towards antiquarianism is apparent; too large, and the gallop from caveman’s fire to induction hob tends towards whiggishness amongst the wiggs[5] and a sense that what we eat now is inevitably healthier/more diverse/less exploitatively gained than what we ate ‘then’. The intellectual queasiness this might induce in us all is now being further fed by accounts of post-industrial, globalised food insecurity from the likes of Michael Pollan and Joanna Blythman, as well as Slow Food activism worldwide.

By changing the preposition – food in history – do we indicate that, rather than simply focusing on the foods and their preparation, we choose a more elevated investigation into what anthropologists call the ‘foodways’ of the past: the processes, flows and impacts of food in economic, social, cultural, political and environmental/ecological contexts?  This tension between the food itself and the processes in which it is implicated (from raw to cooked, from agricultural production, through to industrial synthesis, from local to global and back) might explain why food has yet to join ‘gender’ or ‘class’ as a fully-paid-up category of historical analysis.

Given the relative scarcity of academic gatherings about ‘food in history’ in Britain until this century (with the honourable exception of the annual Oxford Symposium on Food and Cookery, established 1981, and the Leeds Symposium on Food and Cookery, established 1985), it is clear that food as a historical theme in and about Britain has been a niche pursuit, and it remains mostly invisible in the undergraduate curriculum (with honourable exceptions again: see the courses currently on offer at York and Cambridge, for example). By comparison, European, North American and Australian universities have developed entire degree programmes, courses and research centres around the historical study of food, for example Boston University’s gastronomy programme, the University of Adelaide’s Centre for the History of Food and Drink, and IEHCA (l’Institut Européen d’Histoire et des Cultures de l’Alimentation) at the Université François-Rabelais de Tours.

Such mainly non-British outputs map the differential impact of certain historiographical and theoretical traditions in which food/foodways have carried explanatory power. The Annales school – with Fernand Braudel’s uncharacteristically romantic statement that ‘the mere smell of cooking can evoke a whole civilisation’[6] its tagline – fuelled extensive and ongoing French and Italian scholarship, in the work of Jean-Louis Flandrin, Bruno Laurioux, Massimo Montanari and many others. In North America, cross-disciplinary currents between history and archaeology underpinned the influential focus on seventeenth, eighteenth and nineteenth-century ‘foodways’ that has now extended to Australian and South African historical archaeologies.

The most abundant, and oft-replenished, source for food history/food in history in Britain is popular print, radio, television and now the internet.[7] Since last autumn I have watched Clarissa Dickson Wright tackle ‘Breakfast, Lunch and Dinner’ in historical perspective (BBC4, autumn 2012), read William Sitwell’s A History of Food in One Hundred Recipes (Harper Collins, 2011), and received Bee Wilson’s Consider the Fork: a History of Invention in the Kitchen (Particular Books, 2012) for Christmas. And these are only the cherries decorating a much larger cake. Alongside them, energetic heritage engagement with ‘life below stairs’ serves forth a feast of food re-enactment, or ‘experimental archaeology’, depending on your point of view. Hampton Court Palace’s Tudor kitchens draw large audiences on ‘live’ cooking weekends, while the historical ‘inserts’ in The Great British Bake Off (BBC2, 2010-present) leaven the melodrama of collapsing sponges with snippets about baking BMB (Before Mary Berry). That serious, seminal scholarly research underpins some of this output is not in question – leading lights in this field, Ivan Day and Peter Brears (significantly, freelancers both), effectively created it – but questions about authenticity remain. Modern food industrialisation, commercialisation and technologies of preparation, as well as health and safety and visitor engagement agendas, conspire to make these routes into food history no less fraught with interpretational problems than text-based scholarship.

Recipes are not the only food and drink ‘texts’ that can be interrogated, as this 1860 advertisement for Bourbon shows. La Sylphide Bourbon, A.M. Bininger & Co. Bourbon advertising label in the shape of a glass showing a man pursuing three sylphs. © Rufus Watles & L.C. Sanger, lithograph by Sarony, Major & Knapp, New York (public domain via Wikimedia Commons).

Sources for historical investigation of food are of course crucially problematic. The history of food is NOT in the ‘recipes if we but had them all’, as the pioneering independent food historian Karen Hess once proclaimed. Yet, while food texts are more diverse than simply recipes – everything from state papers to chip-paper ephemera – the recipe text still enchants, continuing a pedigree of food history research that launched with early antiquarian interest in the ‘ancient’, medieval texts of elite food ordinances and recipes.[8] But, as the 2013 Oxford Symposium on Food and Cookery acknowledges in its theme of ‘Food and Material Culture’, we do need also to ‘consider the fork’…and the hearth, the cooking pot, the table, the kitchen and the restaurant dining room, as necessary material routes into more nuanced accounts of food in whatever history we seek to write. Material food history is rather fashionable right now (see Bee Wilson’s book), but let us not be blind to its limitations, especially the reading off of food practices and, more problematically still, tastes, from proxy objects: how many of us have an unused juicer in a cupboard, from which future historians might read off a non-existent juice obsession?

More of a challenge still is the ephemerality (more or less) of what goes in the pot or on the plate: a challenge that only archaeology can confront for the distant and not-so distant past, before film and photography as documentary sources. The integration and interrogation of archaeobotanical, zooarchaeological and palaeographic deposits for the historic era with text and artefactual data is a crucial development, allowing scholars to test how prescriptive food texts (say, cookbooks or public health guidelines) are borne out in practical bodily and waste matter. Exploratory work in this area has tended to be multi-disciplinary, but there are some exciting interdisciplinary possibilities in this area, and interdisciplinarity is arguably what will produce a richer vein of ‘food in history’ research.[9]

Yet, if archaeological techniques and data are providing the piquant ‘new’ in our grand sallet, other themes are more familiar. My recent reading of the sadly-neglected children’s tale by André Maurois, Fattypuffs and Thinifers (first published in French in 1930), has shaped what follows. For those of you unfamiliar with the book, the Fattypuffs are cheerfully obese, life-embracing constant eaters, whose mantra is ‘one must live to eat’; the Thinifers are rule-bound, ruler-thin workaholics, mouthing ‘one must eat to live, not live to eat’ before each sparse repast.[10] These two tribes wage Swiftian war for territorial supremacy.

Recognisably ‘Thinifer’ and ‘Fattypuff’ tendencies are not difficult to identify in current ‘food in history’. Tending towards economic and ‘techno-physical’ topics, ‘Thinifer’ histories focus on food supply systems and their resilience (or food security studies: for example Frank Dikötter’s 2010 Mao’s Great Famine) and on the corporeal consequences of dietary (in)sufficiency.[11] These studies have historicised the concerns of modern economists, politicians and ecologists with what is robust, and what is less so, in contemporary food systems. Slightly less ‘Thinifer’ in tone, work inspired by E.P. Thompson, while not necessarily being particularly concerned with the food itself, has reached beyond the purely quantitative to think about the social and moral ramifications of food shortage and food entitlement. This produces histories – like those of John Walter for early modern England – that not only enrich the notion of food, and access to it, as ‘a system’, but present a system in which individuals and communities, as much as states, supply lines and global commodification, have agency.

The emergence of the history of medicine as not only a sub-discipline within academic history, but as a well-funded area of research, thanks to the Wellcome Trust, has also brought seemingly Thinifer concerns – diet, nutrition and health – front-stage. But understanding modern challenges to dietary equilibrium and nutritional equality (a current Wellcome Trust priority) is not simply about physiological and biomedical issues. Cultural dispositions of communities to particular tastes, food customs and ideas of corporeal wellbeing are historically contingent, as well as often resistant to authoritative and public health agendas, as Keir Waddington has recently shown for the Victorian sausage.[12]

Here we are edging into ‘Fattypuff’ territory. One of the consequences of the late twentieth-century cultural ‘turn’ saw foodways emerging as a cultural player, from the ‘civilising’ of behaviours and collective identities around food (in Stephen Mennell’s seminal historical sociology) to literary engagements with food, to the roles played by food-in-space, for example the later Stuart coffee house or the Revolutionary French restaurant.[13] But, like the Rabelaisian Fattypuffs, cultural histories of food have recently been all-consuming: as in so many other areas of historical research, everything (economics, diet, ecologies) feeds the cultural ‘stomach’. A case in point is the recent series from Berg. While each volume covers food systems, food security and ‘body and soul’, they are nevertheless marketed under the general series title A Cultural History of Food, a decision that might narrow readership and does not reflect the different methodological and theoretical standpoints of contributors.[14]

So what happens between the Fattypuffs and Thinifers? Although the latter quickly overrun the Fattypuff realm, deposing king and government, colonisers and colonised undergo mutual accommodation. Fattypuffs see the virtues of eating less (but not all slim down), while Thinifers realise that functionalist eating may not be the only way to flourish. Historians interested in ‘food in history’ likewise need to be open-minded about what methodological tools and sources to deploy in using the study of foodways to answer some of our larger historical questions: questions about dietary (and thus physiological and ecological) change and adaptation; about food security, and thus geopolitical security, on the ground as well as at policy level; and about the cultural formation of individuals as well as of states, nations and civilisations. We need historical vantage points that complicate approaches assuming shared knowledge, shared experiences and, indeed, shared tastes. These vantage points may in future need to be as much ‘glocal’ in scale and tone as they are now either local or global, and to admit many more ingredients, combined in unexpected ways, than even the grandest of Robert May’s sallets.

[1] Robert May, The Accomplisht Cook, or, The Art and Mystery of Cookery (London, 1660), pp. 158-65. J[ohn] E[velyn], Acetaria. A Discourse of Sallets (London, 1699).

[2] www.reaktionbooks.co.uk/series.html?id=19 (accessed 24/1/2013).

[3] B. W. Higman, How Food Made History (Wiley-Blackwell: 2011).

[4] T. Sarah Peterson, The Cookbook That Changed the World: the Invention of Modern Cuisine (Tempus, 2006).

[5] A yeasted bun appearing in later Stuart and Georgian recipe collections.

[6] Fernand Braudel, The Structures of Everyday Life (first published in French 1979; this edition University of California Press, 1992), p. 64.

[7] See Ken Albala’s ‘Food Rant’, at kenalbala.blogspot.co.uk; and Ivan Day’s ‘Food History Jottings’ at foodhistorjottings.blogspot.co.uk (both accessed 24/1/2013).

[8] Gilly Lehmann, The British Housewife: Cookery Books, Cooking and Society in Eighteenth-Century England (Prospect Books, 2003) and Janet Theophano, Eat My Words: Reading Women’s Lives Through the Cookbooks They Wrote (Palgrave, 2002).

[9] C. M. Woolgar, D. Serjeantson and T. Waldron (eds), Food in Medieval England: Diet and Nutrition (OUP, 2006), esp. pp. 1-8.

[10] A Ciceronian epigram popularised in Jean Baptiste Molière, The Miser, act 3, sc. 1 (1669).

[11] E.g. Craig Muldrew, Food, Energy and the Creation of Industriousness: Work and Material Culture in Agrarian England, 1550-1780 (CUP, 2011).

[12] Keir Waddington, ‘The dangerous sausage: diet, meat and disease in Victorian and Edwardian Britain’, Cultural and Social History, 8:1 (2011), 51–71.

[13] Stephen Mennell, All Manners of Food: Eating and Taste in England and France from the Middle Ages to the Present (Blackwell, 1985); Joan Fitzpatrick, Food in Shakespeare (Ashgate, 2007); Rebecca Spang, The Invention of the Restaurant: Paris and Modern Gastronomic Culture (Harvard, 2000).

Pizza, pasta and red sauce: Italian or American?

Donna R. Gabaccia, University of Minnesota

Food travels. I once watched an Italian colleague cringe when a Korean friend suggested that children in his country wanted to eat American food such as pizza. As a historian of migration, food and society, I know that neither colleague was unjustified in his reaction. What the world today knows as pizza is the product of a long history of changing connections between Italy and the Americas and between both countries and the wider world. It is a history of travel, tourism, migration, agriculture, industry, commerce and creativity in the kitchen.

People crossing borders carry along the tastes and sometimes also the seeds, recipes and ingredients of their homes. Mobility requires, inspires and facilitates commerce in familiar and exotic goods. Similarly, travel inspires a cook to experiment, to borrow and to adapt. It is extraordinarily difficult to apply national labels to people and products in a mobile world. Nevertheless, in a world of national states and national loyalties, we persist in doing just that. Like the humans who created and carried them, spaghetti with red sauce, peppers, polenta, zucchini and pizza with peppers or tomatoes have rarely moved in one direction only.

‘Italianizing’ the foods of the Americas, 1500-1900

Scholars have called the joining of old and new worlds the ‘Columbian exchange’. The spread of American crops transformed foodways worldwide. It was probably hunger more than curiosity that motivated Asians and Europeans to experiment and to adopt these new foods.

Certainly in Italy, peasants had long been accustomed to healthy if frugal foodways, known as la cucina povera. In 1500 the Americas had not yet enriched the triumvirate of classical Mediterranean cuisine – wheat, wine and olive oil – with its own three sisters (corn, beans and squash) or with their close cousins, the tomato and the pepper. Of course the poor of Italy did not in any case consume much wheat or wine. Their breads and porridges were of lesser grains, including those such as farro, chestnut meal and buckwheat that have recently been rediscovered by Italian ‘slow food’ advocates. Wild and cultivated greens, oil, onions, salt, cheese, sausage or fish flavoured such staples, with wine, wheat bread, meats and sweets as occasional, festival treats.

It took many centuries before the consequences of the Columbian exchange became apparent in Italy. Many of the crops of the new world grew at first only as oddities in botanical gardens. They grew also on the little ‘handkerchiefs’ of land that peasants cultivated to sustain themselves. By the late eighteenth century, the cultivation of American crops in Italy was both sufficiently extensive and so uneven that American crops were beginning to accentuate in new ways the regional differences of agriculture and taste characteristic of all peasant societies.

Soon after the formation of Italy’s national state in 1861, an agrarian inquest surveyed and catalogued the crops and foodways of the nation’s peasants. It established that almost everywhere in Italy the poor ate corn, potatoes and beans but prepared them with regionally diverse techniques developed on more ancient grains and pulses. In the north, corn became polenta; in the south, it was an element in bread, including the flatbreads called pizze or u’bizz. Occasionally, as in Naples, corn was prepared in American ways – for example, roasted and sold by female street vendors. Just as Columbus thought he had landed in the Indies, eaters in Italy knew corn as granoturco, which may have referred to its reddish colour but more likely suggested that the grain had entered Italy from Asia (‘Turkey’). Because few American cooks had travelled to Italy, corn-eating peasants did not learn the nutritional secrets of American hominy and developed dietary deficiencies.

The regional adoption of other American foods such as tomatoes, peppers, cactus fruit and zucchini revealed the long-term consequences of Italy’s central place in the Spanish empire. Peasants living around Genoa and Elba, in the north west, and in Naples and Sicily, in the south, were most likely to grow and eat tomatoes. Sicily and much of Italy’s south was ruled by Aragon from the fifteenth until the early eighteenth century. During these centuries, the sailors and merchants of the independent city state of Genoa, including Christopher Columbus, were not only explorers in Spanish employ but the main organizers of its imperial commerce and trade.

Naples, the capital city of the formerly Spanish provinces of southern Italy, played an especially central role in the invention of two dishes that would soon travel the world. Formerly known for its impoverished vegetarian ‘leaf-eaters’ (mangiafoglie), Naples in the late eighteenth century gained a reputation, spread throughout Europe by curious tourists, as the home of macaroni eating. Street markets featured young men and boys who ritualistically and dramatically consumed long noodles dressed only with a grating of cheese. Similarly, vendors wandered the streets with portable tables displaying small bread rounds topped with oil or onion. Almost simultaneously, in the 1830s, tourists and travellers began to report finding both traditional dishes regularly topped with tomatoes and with tomato sauce. Pizza and spaghetti with red sauce soon became symbols of the lively plebeian port culture of Naples.

Mass migration; mass production; mass consumption, 1870-1920

Between 1870 and 1970, over 26 million individuals departed from Italy as migrants, typically in search of work. The residents of Italy had long been poor, but only in the late nineteenth century did their poverty motivate them to travel such long distances.

Among the changes that set a new transatlantic world into motion were exports of crops and animals that originated in Europe but were raised on a massive scale in the Americas. As the growers of wheat on the North American prairies and Argentinean Pampas began to seek buyers worldwide, they threatened the livelihoods of peasant growers of wheat in Sicily. In 1870, the US was a major importer of lemons and citrus from Sicily and Naples and of nuts and fruits from Italy generally. The development of citrus, fruit and nut orchards, first in California and then in Florida, forced peasants in Italy to consider new options. One of these options was migration.

When Italians sought work, they tended to travel along well-established commercial routes. Between 1870 and 1920, approximately one-third of Italy’s migrants went to North America; one-quarter to South America; and over 40 per cent to transalpine European destinations. Rates of return from all these destinations were significant, varying from as high as 80 per cent of those working in Europe to about 50 per cent of those in the Americas. The peasants and artisans of Genoa in the north and Sicily, Naples, Calabria and Basilicata in the south were far more likely to go to the Americas, and especially, after 1880, to depart for the United States. Fully three-quarters of the emigrants of Genoa opted for the US; in Sicily and Naples, the proportions of US-bound migrants were about the same.

But migration was not the only response. Agricultural innovation was another. Already in the mid 1880s, peasants and artisans on the northern coast of Sicily and around Naples (two areas that had long been heavily engaged in the cultivation and export of citrus) began to expand cultivation of tomatoes and to process them in new ways. Small factories produced dried cakes of tomato puree reduced to a thick, dark paste and tin cans of skinned, pulverized tomatoes lightly cooked into sauces. The scale of pasta production increased, with many factories in the immediate suburbs of Naples. Much of the wheat for these factories came from Sicily. The Italian government during this period also particularly promoted the expansion of olive production.

The cultivation, processing and export of tomatoes, the canning and export of olive oil and the transformation of Sicilian wheat into pasta for export skyrocketed as the Italians who had emigrated to Argentina, the US, Brazil and Canada created a mass market for them. Parma, in central Italy, soon emerged as an important centre of canning and pasta manufacture, especially for export to the UK and other countries, such as France, Belgium, Germany, Switzerland and Austria, where large numbers of Italian emigrants also worked. Migration and agricultural innovation developed an odd and changing symbiosis. Unable to compete successfully on world wheat markets, Sicily’s wheat-growers now sold to pasta manufacturers who produced a more expensive product for export. Unable to export tons of lemons and oranges to the US, peasants throughout Italy expanded production of tomatoes, again with an export market in view. But this time the ‘foreign’ market in both cases was formed by immigrant consumers who had better access to cash through wage-earning than did Italy’s peasants.

‘Americanising’ immigrant foods, 1920-45

By 1920, almost five million immigrants from Italy lived in the US alone. New York and Buenos Aires each claimed more Italian residents than any city on the Italian peninsula. In North and South America, Europe, Africa and Australia, Italian citizens living abroad formed a population about a quarter of the size of Italy’s residents. For most immigrants, daily life meant hard work and low wages but also, especially in the Americas, surprisingly bountiful dinner tables. The holiday foods of Italy – pasta, meat, cheese, sugar and coffee – became daily fare. To the classical Mediterranean trilogy of wine, wheaten bread and imported oil, immigrants added and made liberal use of imported canned tomato sauces and packages of pasta.

The Americanization of immigrant cuisine began in immigrant kitchens and restaurants. Short work and low pay required working-class immigrants to revert at times to Italy’s cucina povera but they and their children more often reported their pleasure and satisfaction with low food prices. New markets inspired culinary creativity. In the US, Chicago packing houses delivered tons of slaughtered, refrigerated and cheap beef to immigrant consumers. Whether to tempt consumers unaccustomed to aged beef or to disguise signs of the onset of decay, butchers ground much of this beef, which immigrant cooks then transformed into a dish of meatballs with tomato sauce on spaghetti. In Buenos Aires, immigrants pounded freshly slaughtered beef from the pampas to resemble the veal cutlets of Milan; smothering it with tomato sauce emerging from cans packed in Naples, they called the dish milanese alla napoletana.

The creativity of immigrant cooks also drew on the labour and homeland connections of immigrant food growers and retailers. Wherever they travelled, immigrants from Italy specialized in the raising, trading and import of foods. Among the importers, the Genoese had established precedence in the US already in the 1870s. They imported oil from their home region, pasta from Parma, and dried fruits and nuts, wines and liquors, and cheeses and dried meats. In many cities, including Baltimore and New Orleans, Sicilians’ initial dominance of the transatlantic citrus fruit trade gave them an early start toward developing a prominent place in the production and marketing of fruits from California, Florida and the Caribbean.

Most American cities housed significant populations of immigrant food producers. American-made pasta hung on racks in bakeries, groceries and kitchens throughout Little Italys. By the second decade of the twentieth century, immigrant entrepreneurs produced Italian-style pastas in American-style factories. Immigrant truck gardeners introduced crops familiar in Italy, carrying their produce to urban grocers and vendors. While eggplant and grapes rarely flourished in New York City, Toronto or even Buenos Aires, immigrants soon found their way to California with its Mediterranean climate. By 1900, transcontinental railroads hauled hundreds of gallons of the so-called California ‘dago red’ wine (which would later even be bottled in straw-wrapped Chianti-style bottles) to consumers in eastern cities. When the US prohibited the sale of alcoholic beverages after 1919, the same railroads delivered tons of California grapes to immigrants who revived home-based production of wine in their basements and bathtubs.

The numbers of immigrant consumers and restaurants were large enough to attract interest from natives. English-speakers in search of quick, cheap or exotic meals ventured quickly into immigrant restaurants. What they found there was not completely strange to them. Tomatoes were scarcely new to Anglo-Americans, having been eaten since the eighteenth century. Recipe books for English-speaking cooks had included recipes for macaroni with cheese or with tomatoes, ‘in the Italian style’, since the early nineteenth century. The industrialized canning of tomatoes in New Jersey and the Middle West had become big business already in the 1860s and throughout the late nineteenth century such canners tried to find ways to increase consumption. But immigrant consumers were initially suspicious of American cans which contained whole tomatoes of an unfamiliar variety.

By 1915, an investigator employed by the US Department of Commerce had travelled to Italy to assess the threat that Italy’s exports posed to American industry. Meanwhile, reformers working to Americanize immigrant populations railed at their consumption choices, and tried to convince them that the expensive sausages, wines, oils, pastas and canned goods of Italy were extravagances that workers could not afford. The reformers, too, urged immigrants to find American substitutes for Italian products.

The advent of fascism in Italy, after Mussolini’s famous march on Rome in 1922, proved a boon both to American businessmen hoping to lure immigrant consumers and to reformers hoping to wean working-class households off their imported tastes. Fascist nationalism emphasized the importance of economic self-sufficiency; Mussolini’s ‘battle for grain’ aimed to ensure that Italy’s fields delivered calories first to Italy’s residents. By 1930, many nations around the world were responding to the Great Depression by hoisting tariff barriers ever higher and raising the prices of imported goods for immigrant consumers.

Under these conditions, immigrants and natives alike had still greater incentives to produce in the Americas the products that immigrants desired. Pasta production soared in the United States and Argentina. Both countries were also soon producing cheeses that approximated, if they did not exactly replicate, the aged, hard cheeses of Parma. California growers and packers began to cultivate and can the plum-shaped tomatoes of Naples. Immigrants such as Hector Boiardi in the US went a step farther by canning tomatoes and pasta together – a product that, as the ‘Chef Boyardee’ brand, attracted first an American corporate purchaser and then a major buyer in the form of the US army. With 15 million soldiers to feed during World War II, the American military may have introduced more potential consumers – at home and abroad – to Italian foods than had all the immigrant restaurateurs of Little Italys in American cities.

Pizza and red sauce: Italian or American?

No peasant in Italy had eaten spaghetti with meatballs or milanese alla napoletana prior to migration. Such dishes used ingredients that had originated in the Americas but by 1920 those ingredients were typically imported from Italy before being combined by immigrant cooks in America with the meats that had originated in Europe but were mass produced in the new world.

Pizza Hut did not exist in 1945 and at that time surprisingly few English- or Spanish-speaking Americans outside New York, Chicago or Buenos Aires had learned to love the tomato-enhanced ‘Italian’ flat bread. Nevertheless, the mass migrations from Italy to the Americas and the trade wars of the years of economic depression and nationalist warfare had already created the agricultural, industrial and commercial foundations for the ‘American’ exports of ‘Italian’ foods and dishes. Pizza Hut first sold ‘Italian’ pizza in the 1950s in a town (Wichita, Kansas) with few immigrant consumers. By the 1960s, Americans travelling as tourists to Italy not only learned to love ‘Italian’ pizza but to expect to find it wherever they travelled in that country, not only in Naples. Italians, too, began eating this emblematic ‘Italian’ food in Turin and Venice. Soon enough, Korean children, too, craved the ‘American’ dish that Pizza Hut sells around the world.

Despite this history of movement and exchange, people around the world continue to insist on fixing national labels to dishes such as pizza and spaghetti with red sauce. On what grounds do they make such choices? Sometimes they feature the place of the food’s production. Sometimes culinary traditions or supposedly distinctive regional flavourings are determinative. Sometimes the origin of the cook decides the label. Sometimes it is the location of kitchens or factories, or of the diners and consumers of the food. The history of the humble pizza and the simple dish of spaghetti with meatballs encourages readers to consider a very large question indeed: why, in a globalizing world of rapidly travelling people, goods and tastes, do so many still insist on the fixity of the relation between the culinary and the national?

This article was first published in History in Focus.

Suggestions for further reading:

  • Jose Morilla Critz, Alan L. Olmstead and Paul W. Rhode, ‘“Horn of Plenty”: the globalization of Mediterranean horticulture and the economic development of southern Europe, 1880-1930’, Journal of Economic History, 59 (1999), 316-52.
  • Alfred W. Crosby, The Columbian Exchange: Biological and Cultural Consequences of 1492 (30th anniversary edn., Westport, Conn., 2003).
  • Hasia R. Diner, Hungering for America: Italian, Irish, and Jewish Foodways in the Age of Migration (Cambridge, Mass., 2001).
  • Donna R. Gabaccia, We are What We Eat: Ethnic Food and the Making of Americans (Cambridge, Mass., 1998).
  • Carol Helstosky, Garlic and Oil: Politics and Food in Italy (Oxford and New York, 2004).
  • Jeffrey Pilcher, Food in World History (New York, 2005).
  • Silvano Serventi and Françoise Sabban, Pasta: the Story of a Universal Food (New York, 2003).
  • Andrew F. Smith, The Tomato in America: Early History, Culture, and Cookery (Columbia, SC, 1994).

Lenten fare

Continuing the theme of abstinence, this blog post by Jonathan Blaney was first published on the IHR Digital Blog on 21 February 2012.

The history of Lent is complicated. It touches on two other complicated subjects: the history of early Christianity and calendrics. Anyone who would like an in-depth treatment of the latter, with a bit of the former thrown in, should turn to the second volume of Anthony Grafton’s erudite intellectual biography of Joseph Scaliger. Scaliger was the most famous intellectual in Europe in his day and he devoted much of his life to a now-forgotten subject: the reconstruction of ancient chronology. Pagan and Christian festivals were central to that study.

The early history of Lent is usefully summarised in the Catholic Encyclopedia. There is no evidence for early Christians keeping a Lenten fast. Tertullian, for example, doesn’t mention it in his tract on fasting. It may be that when the fast did begin it was 40 hours (to mark the period in which Jesus was thought to have been in the tomb) rather than 40 days.

The Oxford Companion to the Year explains that “By late antiquity the prevailing Western custom was to keep six weeks of fast (excepting Sundays)…the tithe of the year”. The fasting diet seems to have originally excluded meat and dairy, but this was gradually relaxed, especially if those wanting exemptions could pay: the Butter Tower at Rouen Cathedral is one example of how much money was raised from those who wanted to keep up their dairy intake.

In Elizabethan England the government was torn between deprecating fasting as a Catholic practice, and advocating it in an attempt to support the fisheries. Keith Thomas says in Religion and the Decline of Magic that it was forbidden to get married during Lent (although you could, as usual, purchase a special licence). Puritans attempted to get these rules repealed in parliament but failed and they “were still being enforced at the beginning of the eighteenth century”. Not only marriage was off limits:

The Lenten fast may have originally coincided with a shortage of food at that time of the year, but it acquired other less utilitarian connotations. It was regarded by many Catholic clergy as an improper time for marital intercourse, and the findings of modern demographers suggest that the Lent period in early modern Europe may have been marked by fewer conceptions than at other times of the year.

[Religion and the Decline of Magic (1997) p.620]

Protestants, Thomas goes on, campaigned against much of the traditional Christian calendar. Books such as Joshua Stopford’s Pagano-Papismus (1675) located the festivals in pagan antecedents: Shrove Tuesday was merely Saturnalia in disguise. This may come as a surprise to those eating their pancakes today.