I recently found myself in the village of Eyam in the Derbyshire Dales, looking out over the views of Calver Peak and the River Derwent from the ‘Riley Graveyard’. An austere set of headstones marks graves dug some 350 years ago by Elizabeth Hancock, who we are led to believe took on the agonising task herself of transporting the bodies of her husband and six children from their home to this quiet resting place on the hillside.
My visit prompted me to reconsider the story of Eyam, which has been told to many a generation of schoolchildren in primary school history lessons. In actuality, it is more the product of folklore and fable than accurate historical account and, as the historian Patrick Wallis has pointed out, it is a story reinvented over the years to fit the zeitgeist, sometimes with a literary zeal more akin to Shakespearian tragedy.[1] But the basic story, however told, is that of a village whose residents quarantined themselves to prevent the spread of bubonic plague in 1665 and, over the course of 14 months, sacrificed many of their lives to the deadly disease.
In a recent article about the 1665 plague outbreak in Eyam, the epidemiologist Xavier Didelot wrote of how modern science and statistical modelling can now reveal that the villagers’ sacrifice was not in vain and would likely have prevented the disease from spreading and potentially causing many more deaths. In short, the villagers did the right thing; but they did so without the knowledge of science we now have. Indeed, at the time, most believed that disease was spread by miasma, a sort of noxious air. This led Didelot to comment that the quarantine may have been right ‘albeit for the wrong reasons’.[2]

The assumption here is that the ‘right’ thing to do in such a public health crisis is to, as Boris Johnson so plainly put it in his Covid-19 daily briefings, ‘follow the science’ and thus protect the public and State institutions. However, history has revealed how science has often fallen short in providing decision-makers with clear direction on the control of disease, and often what the public consider to be ‘right’ is guided by quotidian ethics rather than epidemiological models.
Modern science has a problem in determining the right course of action in a crisis because crises generate heightened levels of empirical and temporal uncertainty, and responses must be implemented within complex social systems going through intense and unpredictable changes. Moreover, to make the claim that a decision is ‘right’ is to make a moral claim, not merely a scientific one. Empirical calculations of rightness tend to lead to utilitarian arguments, coldly weighing up measurable costs and benefits, and this approach is more often out of kilter with the everyday values of the public, who shudder at the idea that their lives and preferences are mere numbers on a spreadsheet.
People are captivated by the story of Eyam, not because of its empirical weight as an example of decision-making that saved the most lives or maximised utility, but because it shows normal people guided by virtues such as solidarity, courage, honesty and patience, putting aside their individual self-interest to fight collectively for a common good. These were similar to the virtues many of us admired during the Covid-19 pandemic when we praised those who abided by the lockdown rules and cheered frontline workers, while railing against those in the political elite who flouted their own rules with lockdown parties in Number 10, a boozy campaign event in Durham, and of course Dominic Cummings’ needless excursion to Barnard Castle.
In this article, I wish to discuss several historical examples in order to challenge the techno-scientific approach to tackling epidemics. My aim here is not to suggest that modern science and medicine are unhelpful or unnecessary for good policymaking in health crises. Indeed, it is easy to see how ignoring scientific evidence and acting purely on the basis of religious, ritual and cultural practice or political ideology often leads to unjustifiable and needless harm. Whole communities in Africa can fall victim to deadly viruses like Ebola if left to continue their usual burial practices, involving close contact with infected corpses and each other.[3] Let’s also not forget the many children in Texas who recently suffered – and two of whom died – from measles, an easily preventable disease, because of politics, misinformation and pseudo-science curtailing the uptake of the MMR vaccine.[4]
But it is also plainly wrong to treat science like a silver bullet or a miracle cure. Where governments have relied too heavily on scientific and technical advice during epidemics, they have often made choices that people feel are unjust. Governments have attempted to resolve this by changing their normative approach to scientific uncertainty, but this overlooks the central issue of the relationship between science and the State. Ultimately, I argue that we have become overreliant on techno-scientific thinking at the expense of moral and political debate, and this is bad for how we respond to epidemiological crises and bad for democratic society in general.
Mad cows and revelations
Around the time that I was first learning about the story of Eyam in school, a very different sort of health crisis hit the UK. For a decade or more, since the mid-1980s, the British government had known that a large number of cattle on British farms were infected with a fatal neurodegenerative disease called Bovine Spongiform Encephalopathy (BSE), known colloquially as ‘Mad Cow Disease’. The government set up a group of scientific experts to advise it on the potential risks posed to humans by BSE-infected meat, but concrete evidence was very limited and the risks highly uncertain. The government adopted a ‘wait-and-see’ approach, relying on their expert advisers to warn them if they needed to change policy, rather than exploring precautionary options in light of the uncertainty about the risk.
By 1990 the scientific consensus had changed from one where it was believed that BSE could not transfer between species, to one where it became obvious that it could when a domestic cat caught BSE from eating infected meat. Despite this, the Conservative government’s Environment Secretary, John Gummer, continued to champion the safety of British beef by attempting – unsuccessfully – to feed a beefburger to his young daughter on live television. All the while cases of BSE in cows continued to rise, peaking at 100,000 cases in 1992–93, but the position remained that there was insufficient evidence of any risk to human health.[5]
It wasn’t until 1996 that the government finally had to concede – in the most alarming of ways – that there was indeed a risk to human health from BSE, when it was announced that there was a link between several young people dying from a mysterious neurodegenerative disease and their consumption of beef from BSE-infected cows.
Scientists debated and disagreed over just how many people could be at risk. Just how much infected meat had been sold, and would everyone who ate it die? A scientific paper at the time disturbingly admitted that ‘there is no way of knowing how many people will die’.[6] I remember it clearly, as a boy worried about eating the Sunday roast, thinking I might go mad like the cows. The uncertainty, coupled with a sensationalist media and a growing lack of trust in the government and its advisers, left the British public in a state of fear, anger and disgust. Europe placed bans on the sale of British beef lasting a decade; the United States slaughtered cattle imported from the UK.
A total of 178 people would die from the human form of BSE, known as variant Creutzfeldt-Jakob disease (vCJD), a truly horrific disease; there hasn’t been a case since 2016. Not the doomsday scenario some were fearful of, but a human tragedy nonetheless, and one that could have been largely avoided had the government taken a more precautionary approach in the absence of clear scientific evidence.

The public inquiry into the BSE crisis was critical of the government’s overreliance on scientific experts to guide – some would argue to decide on – policymaking. But, as the inquiry rightly argued, ‘decision-making is not a purely scientific process’.[7] The BSE outbreak revealed some of the limits of a techno-scientific approach to crisis. When science is confronted with uncertainty, it cannot be relied on to make decisions we might consider just. Most feel there was no justice for the 178 who died of vCJD when it was clear that many, if not all, of those deaths could have been avoided by imposing more stringent food standard measures to prevent contamination of the food supply.
Today, the risk to the public from BSE is extremely low. This was achieved by more rigorously ensuring that all parts of a cow that could contain the disease are removed and destroyed after slaughter, by regularly testing live cattle, and by ending the cannibalistic practice of feeding bits of cow and other mammals back to cattle. But during the outbreak, experts wrestled with the question of more extreme measures to eradicate the disease. One scientific adviser stated at the time,
if one set out to eliminate any potential theoretical risk from BSE then it would be necessary to destroy the entire national herd of cattle, however, various control measures could reduce any risk to a minimal level. Ultimately a decision on whether a zero or minimal risk was acceptable was a political one.[8]
The fact that politicians were looking to scientists to make this decision for them suggests a policymaking system out of kilter with the values of democracy. Around the time the BSE inquiry concluded, the House of Lords Science and Technology Committee warned,
Some issues currently treated by decision-makers as scientific issues in fact involve many other factors besides science. Framing the problem wrongly by excluding moral, social, ethical and other concerns invites hostility.[9]
Indeed, the public were hostile to the way the government handled the crisis. Those elected to make political decisions and be accountable for them ignored their primary duty and treated public concerns with contempt for years. From a purely statistical standpoint, one may argue that the overall risk of contracting vCJD turned out to be low, and the fear response from the public an overreaction. From a moral perspective, however, the public response to BSE would be better characterised as ‘wrong without having been wrongheaded’.[10]
The smell of the pyres
About a year after the announcement about the link between BSE and vCJD, New Labour were swept to power in the 1997 general election. They were in government when the Phillips Inquiry offered its damning criticism of the previous Conservative government’s handling of the BSE crisis. The Labour government responded positively to the inquiry, promising to implement its recommendations, and acknowledging crucially that ministers had ‘failed to recognise that the proper role of advisory committees is to advise on science, not to make decisions for the Government on issues of policy and implementation’.[11]
Labour didn’t get long before their modified approach to animal disease outbreaks faced a tough test. Less than a fortnight after Labour published its interim response to the Phillips Inquiry, a routine inspection at an abattoir in Essex uncovered cases of Foot-and-Mouth Disease (FMD) in pigs.
FMD is a virus that primarily affects animals with cloven hooves, such as pigs, sheep and cows. Unlike BSE, there was much more knowledge about the disease’s risk to humans. Cases of FMD in humans are extremely rare, the symptoms usually mild, and the risk to human health from the food chain negligible.[12] The scientific uncertainty was more in the threat it posed to livestock and the economy. FMD spreads rapidly among farm animals, particularly between pigs; it is an airborne infection between cloven-hooved mammals and can travel in the wind for miles. It was not clear just how many farms might already be affected by the outbreak nor how quickly it might then spread from farm to farm.

The government once again looked to scientific advice for help in determining the best response. The agriculture minister, Nick Brown, intended to reassure when he declared the government’s decisions to be ‘guided by the best scientific advice’. But, unlike with the previous government, there was a shift away from broader veterinary and health science advice towards epidemiological modelling, using statistics and computer simulations to inform decision-making. The well-known epidemiologist Roy Anderson had been brought in to lead one of four teams working on modelling the spread of the disease. It was Anderson’s team’s model in particular which ultimately drove policy. Their statistical evidence crowded out other scientific insights and ultimately led the government to take extreme measures to control the outbreak.
The modelling pointed government to the use of a contiguous cull of animals to stop the spread of the virus. The policy meant that farms within a certain radius of another farm with cases of FMD would be required to cull their livestock regardless of whether any were showing signs of the virus. The policy was known as the 24-hour/48-hour cull, as the requirements were for the animals on infected farms to be killed within 24 hours and animals on neighbouring farms to be killed within 48 hours.
What followed was a mass slaughter of around 6.5 million sheep, cattle and pigs, the majority of which were, at the time, perfectly healthy, at an estimated cost of £8 billion to the economy.[13] Animals were culled on almost 11,000 farms, over 8,500 of which were farms where the cull was pre-emptive and no infections had been observed or suspected.[14] If you lived in the worst affected areas, you would have faced the choking stench of the huge pyres burning through the night, hundreds of carcasses piled up. I remember the awful images of the pyres on the news and the interviews with farmers, distraught, devastated. In the words of one young farmer who watched her family’s herd slaughtered, ‘It was like a black hole… Every time you shut your eyes you just see dead animals, it wasn’t easy.’[15]

The pre-emptive slaughter of animals was part of a precautionary approach by government to the crisis, based largely on fairly crude statistical modelling. In contrast to the ‘wait-and-see’ approach adopted for BSE, the government chose to act quickly and decisively in the face of scientific uncertainty to avoid a worst-case scenario, rather than gamble on the outcome being more favourable than the modelling suggested. The precautionary approach was also adopted at a local level, with many authorities choosing to close public footpaths and place limits on movement in rural areas, even in some counties without any cases of FMD.
While the optimistic ‘wait-and-see’ approach in the BSE crisis led to the potentially avoidable deaths of 178 people, the contrasting precautionary approach to FMD likely led to the unnecessary slaughter of at least a million animals and the devastation of many farming communities. New Labour learned the wrong lesson from the BSE crisis. The delay in acting to minimise the risks of BSE was not the problem at the core of the government approach, but rather a symptom of overreliance on techno-scientific thinking. Labour not only continued to rely heavily on techno-scientific thinking in responding to FMD but narrowed the types of scientific evidence they relied on even further, choosing statistical models over deep knowledge of diseases and their effects.
Alex Donaldson, a veterinary scientist on the hastily assembled FMD Science Group, described how the advisory group was more like a ‘modelling sub-committee’ and that the actual experts on the FMD virus were much of the time ‘spectators to discussions between the modellers’.[16] He and other scientists and politicians had spoken out against the decision to perform a pre-emptive, contiguous cull based on a crude and inaccurate model. Caroline Lucas, who was at the time an MEP and vice-president of the European Parliament’s foot-and-mouth committee, criticised the actions of ‘a government basically following a computer model which was completely flawed’.[17] Donaldson and others, including the government’s Chief Scientific Adviser, had suggested an alternative strategy of mass vaccination in the early stages of the outbreak. Labour’s decision to ignore this option and perform a contiguous cull appears to have been down to opposition from the food trade industry and the National Farmers’ Union, who believed that vaccination would devalue animals or make them more difficult to export. Had Labour taken in a wider range of evidence and views, they would have been in a much better position to assess the validity of the opposition and allay fears about vaccination.[18]
The reviews and inquiries that took place after the outbreak highlighted the government’s failure to consider the wider human cost of its policies: not just economic, but social, cultural and psychological. The FMD outbreak was presented as a lot of numbers in spreadsheets and not as a complex social crisis taking place within communities that were built entirely around farming. Lives and livelihoods were devastated, families and communities ripped apart, the effects on mental health and wellbeing deep and longstanding. Once again, techno-science failed to consider the moral dimension to the crisis. Questions of justice arise in decisions that ultimately affect people and places, but they were cast aside; the human dimension of the crisis was overlooked and never debated, and this made the FMD outbreak far more devastating than it needed to be.
Model fever
Labour continued with their precautionary approach, guided by epidemiological models, in response to the Swine Flu pandemic of 2009–10. The new strain of H1N1 influenza spread rapidly around the world in 2009 and little was known about it when it arrived in the UK. The UK government’s framework for responding to pandemic flu had been published two years earlier and was predicated largely on the modelling of a different strain of influenza, H5N1 Avian Flu. These models suggested a reasonable worst-case scenario (RWCS) with an infection rate of 50% and a 2.5% fatality rate, which would suggest up to 750,000 excess deaths. In the early stages of the Swine Flu pandemic, the government commissioned new modelling, but there was very little knowledge of how infectious or virulent the strain was. The scientists dropped their estimate from the 750,000 deaths modelled for Avian Flu down to 65,000, with a case rate of 30% of the population, but there was huge uncertainty in these figures, and they were revised down significantly as new data was collected over the course of the pandemic.
Based on the planning and modelling for an Avian Flu pandemic and the early (and very sketchy) modelling of Swine Flu, the government put in motion another aggressive response, hoping to avoid the worst case of tens of thousands of deaths. This included the stockpiling of vaccines to prepare for a mass vaccination programme and providing antiviral drugs, not only to those infected but also to anyone in close contact with someone infected.
I caught Swine Flu. It wasn’t nice at all. I was struck down with one of the worst fevers I’ve ever had, waking up delirious in the night, the bedsheets soaked through with sweat. But, apart from a lingering cough, I got over it quickly; most people did. In fact, Swine Flu turned out to be pretty mild and it burned itself out fairly rapidly. Around 800,000 people are estimated to have caught the virus, of whom 457 died.[19] These numbers were not just miles better than the reasonable worst-case scenario, but even better than the best-case scenario given by the original models (where a minimum of 3,100 fatalities were expected).[20]
In the wake of the Swine Flu pandemic, opposition politicians, the media, and the wider public started to question the government’s response which many felt had been alarmist. The country was in a deep recession during 2009 following the Global Financial Crisis and the public finances were in a difficult state. Many considered the response to have been extremely wasteful of taxpayers’ money and put the country in an unnecessary state of panic. However, an independent review found the £1.2bn response to have been ‘effective and proportionate’, much to the surprise of many.[21]
Of course, the question of whether the response was ‘effective and proportionate’ is a normative one. It requires us to decide whether we think that the right response to scientific uncertainty is to adopt a pessimistic view of the evidence and do whatever it takes to avoid a catastrophic scenario. Given pandemics are hugely unpredictable, especially at the start, government has to make a moral and political choice whether to respond optimistically to uncertainty and hope that things turn out better than the worst-case scenario, or be more pessimistic and work proactively to avoid the worst-case scenario. As it happens, most people tend to be ambiguity averse, which means we tend to adopt a pessimistic approach and will pay additional – sometimes very high – costs to avoid a worst-case scenario in situations where it is hard to predict the level and likelihood of threat.
We may well consider it morally reasonable, even desirable, for a government to act in a precautionary way to protect its citizens against potential catastrophe, even if this means an inefficient use of resources should the threat turn out to be less severe. But ambiguity aversion doesn’t mean a government should respond impulsively to a single piece of evidence. If the government had considered a wider range of evidence, rather than relying almost solely on early models, it may have been able to respond pre-emptively but in a more efficient way.
Critics of Labour’s response focus on the government’s persistent obsession with following models, just as with the FMD outbreak. This form of aggressive and reductive techno-scientific policymaking is flawed because it treats modelling as if it were a robust representation of real possibilities. Policymakers like models because they provide a clear set of numbers to work with, allowing a sort of utilitarian cost-benefit analysis of the situation, weighing up the efficacy of different responses. But the model itself cannot be the single determinant of government policy.
Indeed, Labour failed to take on the findings of one of their own reviews into the use of models after the FMD outbreak. The 2003 report for Defra concluded that,
The fact that a stochastic model predicts a range of possible ‘futures’, reflecting the unpredictability of real life, means that it must be used with care as a decision support tool. Decision-makers must not rely on the model to make a decision for them but be prepared to use it as part of a process in which other factors, such as the ‘riskiness’ of a policy, are weighed.[22]
Failure to take account of a wider range of evidence alongside the modelling was also criticised in the independent review of the Swine Flu response,
Ministers and officials were keen to understand the likely outcomes as early as possible and this led to unrealistic expectations of modelling, which could not be reliable in the early phases when there was insufficient data.[23]
The review also suggested that more space should have been made to deliberate over whether ‘to assume the worst-case scenario and resource the response accordingly’ or, alternatively, to ‘take a view on the most likely outcome, while monitoring events closely and changing tack as necessary’.[24] This summarises nicely the two approaches to scientific uncertainty: the optimistic ‘wait-and-see’ approach and the more pessimistic and precautionary approach. Determining which approach to scientific uncertainty should be adopted requires consideration of a much broader and interdisciplinary range of evidence than those used in any of the examples discussed so far, and it is ultimately a moral and political choice.
Herd immunity
The form of techno-scientific governance that seemed to proliferate during the 1990s and 2000s is now being challenged by the rise of anti-establishment, populist forms of political discourse. Part of the appeal of populists like Donald Trump and Boris Johnson is that they can appear more attuned to the everyday concerns of the public and capitalise on the declining trust in liberal technocratic government.
The Covid-19 pandemic offered a chance for these more populist governments to explore an alternative type of crisis response. Interestingly, despite Boris Johnson’s consistent criticism of the ‘experts’ and ‘technocrats’ during the Brexit campaign, science became a central part of his political agenda, tying science and research to arguments about national sovereignty and strength (‘restoring Britain’s place as a science superpower’) and linking scientific and technological innovation to simple solutions to the country’s economic and social problems. It was a different relationship with science, a reframing of the official science of the State, to once again bring it into service to the government’s particular political agenda. Indeed, the ‘following the science’ mantra continued to be adopted and used as a way to depoliticise the government’s response.[25]
How well the Johnson government actually followed the science depends a lot on what one defines as ‘science’. While there was still a tendency to treat epidemiological models as real predictions and ignore the levels of uncertainty in their underlying assumptions, this was supplemented with a rise in the use of behavioural science in policymaking, and government looked to behavioural experts in justifying things like mask wearing and social distancing, the so-called ‘non-pharmaceutical interventions’, that ate into people’s personal freedoms.
The Johnson government’s approach to science can also be judged against one’s perspective on the normative question of whether it is better to take a precautionary approach to risk uncertainty or to be more optimistic and adapt as you go along. It is widely accepted that the UK government did not adopt a more precautionary approach to Covid-19, diverging from the rapid and aggressive response seen in the Swine Flu epidemic.[26] The Johnson government was reluctant to implement aggressive measures to control the virus in the early stages. Its early use of science in public communications was to try to play down the threat, including the Government Chief Scientific Adviser Sir Patrick Vallance’s backfiring use of the term ‘herd immunity’ and its relationship to a utilitarian-sounding strategy of ‘flattening the peak’ of infections to allow circulation of the virus without it overwhelming the health service. It was only as new modelling began to suggest that Covid-19 would spread more quickly and cause significantly more harm that tighter measures were imposed, resulting in a full ‘lockdown’ of the country.

Whether or not the government should have taken a more precautionary approach is – I stress again – a moral matter to be determined by consideration and debate over a wide range of factors, not least the values we hold dear in society. It certainly doesn’t seem like the government took such a wide and deliberative approach before choosing this path. The argument that the government hid behind the notion of ‘following the science’ in order to avoid moral and political responsibility certainly seems plausible and may be a line of investigation for the UK Covid-19 Inquiry.
Another country that took the adaptive, wait-and-see approach to Covid-19 was Sweden. Its government displayed perhaps the most extreme form of technocratic thinking and narrowing of the official science advice, giving almost total responsibility for both the scientific evidence and the subsequent response strategy it informed to just a handful of experts in the country’s Public Health Agency. It led some to remark that, during the Covid-19 pandemic, ‘in effect, the democratic institutions ceased to function’.[27]
The Public Health Agency of Sweden had, for some time, adopted a ‘broad conceptualization of health as also incorporating quality of life’ or wellbeing.[28] The welfarist tendency of the Agency to include in their thinking matters which affect wellbeing led them to adopt a pandemic preparedness plan that sought to balance the minimisation of ‘morbidity and mortality’ against minimising ‘other negative consequences for individuals and society’.[29] This led the Agency and its State Epidemiologist, Anders Tegnell, to advocate for a strategy that minimised disruption to economically active Swedes by not imposing any lockdown restrictions and avoided loss of education for children by keeping schools open as much as possible. The strategy was subsequently implemented without political debate or democratic deliberation. Its strongly utilitarian underlying assumptions were left unchallenged, despite a longstanding position in other spheres of public policy in Sweden to take a precautionary and rule-based approach: for instance, their ‘Vision Zero’ strategy, adopted in areas such as road traffic safety and suicide prevention, starts from the premise that ‘life and health can never be exchanged for other benefits within the society’.[30]
Alongside highlighting the Swedish government’s overreliance on a single source of advice – the Public Health Agency – and criticising the slow and badly coordinated attempts to protect the elderly and vulnerable groups as ‘more of a hope than a plan of action’, one of the main findings of the Swedish Coronavirus Commission returned to the question of whether the Swedish response should have taken an adaptive wait-and-see approach or a more precautionary one.[31] They concluded that,
[t]he precautionary principle should be virtually self-evident in responding to an imminent threat. Whoever is responsible for a given activity thus not only has cause, but should also have a duty, to apply this principle when faced with a far-reaching threat to society.[32]
The oscillations between adaptive and precautionary approaches to epidemics seem to continue. Despite the problems the precautionary approach created for the responses to FMD and Swine Flu, switching back to an adaptive approach to scientific uncertainty hasn’t fared well in many minds when we look back on the handling of Covid-19. It seems likely that the UK’s own public inquiry will say something to this effect when it finally concludes.
There must be more to the issue of scientific uncertainty, then. If successive governments have tried different strategies to cope with the fact that science must be deployed in epidemiological crises that cannot be easily understood, predicted or modelled, and they continue to make costly mistakes that lead to injustices and harms, then perhaps the problem is not scientific uncertainty itself, but the way in which governments use science.
Politicising science, depoliticising politics
We have seen through responses to different epidemics how the relationship between science and the State has changed over time. The reaction to heightened uncertainty during epidemiological crises has shifted, and the type of scientific expertise coveted and employed in decision-making has also evolved, particularly with the development of epidemiological modelling. But, in each case, despite the changes and attempts to learn from and avoid previous mistakes, governments continue to respond to epidemiological crises in ways that are ineffective, unjust, or sometimes both.

Something which all of these historical examples show is that there is no singular community of ‘science’ for policymakers to seek out for advice. In a broad view of science, we would consider a huge range of expertise from across many disciplines within a diverse research ecosystem full of ambiguity, creativity and difference. Yet when governments choose to ‘follow’ or to be led by ‘the science’, they are the central drivers of a process of delineation that determines what counts as science in the government’s eyes. It is policymakers, not the broad and diverse scientific community, who ultimately determine what the official science is.
This process is captured in the distinction that Gilles Deleuze and Felix Guattari make between ‘royal, or State science’ and ‘nomad science’.[33] While nomad science is an open, creative and exploratory endeavour, unbound by the authority of institutions, royal science is aligned with the interests of the State and it is crafted to support the agenda of the political class. For Deleuze and Guattari, it is the ‘concern of a man of the State, or one who sides with the State, to maintain a legislative and constituent primacy for royal science’.[34]
According to this view, the State takes ideas and insights from ‘nomad’ science and regularises them into a carefully curated ‘royal’ science, with rigid and stable institutions, norms and practices. So, when an expert advisory group is created or a scientific adviser appointed, Deleuze and Guattari do not see these roles as granting scientific experts any real power or influence of their own within the institutions of the State. Instead, they are there to ‘reproduce or implement’ their intellectual, creative power with ‘an autonomy that is only imagined’.[35]
In this reading, the historical examples take on a slightly different interpretation. Take the BSE crisis, for instance. The Phillips Inquiry concluded that ministers and officials handed too much power and responsibility for policy over to the expert advisory group they set up. Similarly, with FMD and Swine Flu, the modellers were seemingly granted a good deal of responsibility for determining the best course of action. What Deleuze and Guattari’s theory suggests is that the problem isn’t as simple as governments handing over their power and responsibility to experts. Instead, we can see these examples as part of a pattern within modern liberal-democratic government of advocating a technocratic and instrumentalist form of governance, in which scientific experts are utilised in ways that help to reproduce that governance.
What appears most intriguing about this relationship between science and State is that, while it takes a political act to construct the official discourse of science, politicians tend to employ the science they have politically constructed to make the space of policymaking during these sorts of health crises appear depoliticised. They treat the crisis as a narrow techno-scientific problem whose solutions are the objective, rational and discrete product of techno-scientific expertise. Politicians did not unwittingly hand over power to the experts they appointed; they created the roles for experts precisely because doing so helped to reproduce a desired style of governing, one in which moral and political responsibility could be subverted by an appeal to follow only what the experts conclude is objective and rational.
While this reading goes some way towards helping us understand how governments can exert power over the domain of science, choosing what (and whom) to include and exclude, I would suggest it overlooks the degree of agency and influence that scientific advisers can obtain and exercise, much as officials within government administration are known to shape government agendas, a dynamic caricatured by the Sir Humphrey character in Yes Minister. This may explain Deleuze and Guattari’s remark that the process of constructing a royal science ‘does not shield the State from more trouble, this time with the body of intellectuals it itself engendered’, though they do not explore its nature.[36] The case of Sweden during the Covid-19 pandemic is an example of experts exerting their own political agency, though this should still be understood in the context of a longer development of public health discourse in Sweden that became increasingly utilitarian and out of kilter with approaches in other areas of Swedish public policy.
Conclusion
The problem in both populist and technocratic government is that what counts as ‘science’ is politically contrived, cherry-picked to match a particular political agenda or to defer responsibility, and this process is used to obfuscate the moral and political nature of the difficult choices that governments must make in crises.
Science has been applied by governments in ways that are more akin to blind faith and are antithetical to the open, creative, exploratory and refutable character of the scientific community as a whole. Moreover, this narrow and cynical approach to using science and evidence to inform policy is showing signs of undermining public trust in our democratic institutions and of giving ammunition to political forces that seek to dismantle them.
The next epidemic will arrive, perhaps sooner rather than later, and it may prove far more deadly. Scientific evidence should play a central role in how we prepare for and tackle Pandemic X, but it must be broad and multidisciplinary and it cannot be all that we rely on to make decisions. Cara and Michael Reed’s recent book Enough of Experts provides a detailed analysis of the challenges facing experts in policymaking today and concludes that we need to move to a more ‘reflexive/deliberative model’ of scientific expertise that is ‘more open, inclusive and collaborative’ because this is ‘better suited to the endemic uncertainties and instabilities of political, economic and cultural life in the twenty-first century.’[37] This would allow for a broader and more democratic decision-making process that judges options from a range of stakeholder perspectives and by a range of standards and values that are derived in open dialogue between experts, officials, politicians, and wider publics.
What we must never ignore, and always endeavour to highlight, in the face of epidemiological (and other) threats, is the moral nature of the decisions that are made. The people of Eyam acted in a way that many would consider morally right, but they lacked the scientific understanding of the threat they were facing. Today, we have the benefit of science to help us understand disease and how it affects humans and other species, but it is still limited by the uncertainty and complexity of epidemiological threats. It is in this space of uncertainty where our moral faculties and democratic politics are required to determine the right course of action. Science can never answer moral questions, but it can help guide us better in the practical application of our moral values and sentiments.
Bibliography
BBC. ‘Ministers face foot-and-mouth censure’. 22 July 2002. http://news.bbc.co.uk/1/hi/uk_politics/2142734.stm.
BBC. ‘Mad cow disease: What is BSE?’. 18 October 2018. https://www.bbc.co.uk/news/uk-45906585.
BMA. ‘The public health response by UK governments to COVID-19’. BMA Covid Review 4 (2022).
Bowcott, O., and D. Batty. ‘Swine flu: 65,000 deaths is UK’s worst case scenario’. The Guardian, 16 July 2009. https://www.theguardian.com/uk/2009/jul/16/swine-flu-cases-rise-britain.
Brusselaers, N., et al. ‘Evaluation of science advice during the COVID-19 pandemic in Sweden’. Humanities and Social Sciences Communications 9, 91 (2022): https://doi.org/10.1057/s41599-022-01097-5.
Craens, J., K. Frenken, and T. Meelen. ‘Mission-oriented innovation policy: The case of the Swedish “Vision Zero” approach to traffic safety’. Papers in Evolutionary Economic Geography 21.40 (2021). http://econ.geo.uu.nl/peeg/peeg2140.pdf.
Deleuze, G., and F. Guattari. Nomadology: the War Machine. Seattle: Wormwood Distribution, 2010.
Didelot, X. ‘Heroic sacrifice or tragic mistake? Revisiting the Eyam plague, 350 years on’. Significance 13, 5 (2016): 20–25.
Donaldson, A. ‘Lessons to be learned from foot-and-mouth outbreaks’. Vet Times, 21 March 2016. https://www.vettimes.com/news/vets/livestock/lessons-to-be-learned-from-foot-and-mouth-outbreaks.
Hansard. Phillips Inquiry. House of Commons, 15 February 2001.
Health Protection Agency. ‘Weekly pandemic flu media update’, 10 December 2009. https://web.archive.org/web/20100113215351/http://www.hpa.org.uk/webw/HPAweb&HPAwebStandard/HPAweb_C/1259152450217?p=1231252394302.
Hine, D. The 2009 Influenza Pandemic: An Independent Review of the UK Response to the 2009 Influenza Pandemic. London: UK Cabinet Office, 2010.
House of Lords Science and Technology Committee. ‘House of Lords – Science and Technology – Third Report’ (2000).
Kettell, S., and P. Kerr. ‘“Guided by the science”: (de)politicising the UK government’s response to the coronavirus crisis’. The British Journal of Politics and International Relations 24, 1 (2022): 11–30.
Kuppalli, K., and T. M. Perl. ‘Measles in Texas: waning vaccination and a stark warning for public health’. The Lancet Infectious Diseases 25, 5 (2025): 485–87.
Lee-Kwan, S. H., N. DeLuca, R. Bunnell, H. B. Clayton, A. S. Turay, and Y. Mansaray. ‘Facilitators and Barriers to Community Acceptance of Safe, Dignified Medical Burials in the Context of an Ebola Epidemic, Sierra Leone, 2014’. Journal of Health Communication 22, sup1 (2017): 24–30.
National Audit Office. ‘The 2001 Outbreak of Foot and Mouth Disease’, HC 939. London: TSO, 2002.
Official Journal of the European Union. CE 31, Official Journal C 31 E/137, 5 February 2004.
Olofsson, T., et al. ‘The making of a Swedish strategy: How organizational culture shaped the Public Health Agency’s pandemic response’. SSM – Qualitative Research in Health 13, 2 (2022): https://doi.org/10.1016/j.ssmqr.2022.100082.
Prempeh, H., R. Smith, and B. Muller. ‘Foot and mouth disease: the human consequences’. British Medical Journal 322, 7286 (2001): 565–66.
Reed, C., and M. Reed. Enough of experts: Expert authority in crisis. Berlin: De Gruyter, 2023.
Ridley, R. M., and H. F. Baker. ‘Big decisions based on small numbers: Lessons from BSE’. Veterinary Quarterly 21, 3 (1999): 86–92.
Sandman, P. ‘The Mad Cow Crisis: Health and the Public Good’. Review of The Mad Cow Crisis: Health and the Public Good, edited by Scott C. Ratzan. Journal of Health Psychology 5, 1 (2000): 117–18.
Stickler, J. W. ‘Further observations upon foot-and-mouth disease in its relation to human scarlatina as a prophylactic’. JAMA XV, 7 (1890): 237–43.
Taylor, N. ‘Review of the use of models in informing disease control policy development and adjustment. A report for Defra’. Veterinary Epidemiology and Economics Research Unit. University of Reading, 2003.
The BSE Inquiry. Final Report. London: HMSO, 2000.
The Coronavirus Commission [Coronakommissionen]. Summary in English. SOU 2022:10 (2022).
The Public Health Agency of Sweden. Folkhälsomyndigheten Pandemiberedskap. Hur vi förbereder oss – ett kunskapsunderlag (2019).
Tildesley, M. J., P. R. Bessell, M. J. Keeling, and M. E. J. Woolhouse. ‘The role of pre-emptive culling in the control of foot-and-mouth disease’. Proceedings of the Royal Society B-Biological Sciences 276, 1671 (2009): 3239–48. https://doi.org/10.1098/rspb.2009.0427.
Wallis, P. ‘A Dreadful Heritage: Interpreting Epidemic Disease at Eyam, 1666–2000’. History Workshop Journal 61 (2006): 31–56.
Walter, L. ‘Childhood memories of 2001 foot and mouth crisis’. BBC, 16 February 2011. https://www.bbc.co.uk/news/uk-england-devon-12463515.
[1] P. Wallis, ‘A Dreadful Heritage: Interpreting Epidemic Disease at Eyam, 1666–2000’, History Workshop Journal 61 (2006): 31–56.
[2] X. Didelot, ‘Heroic sacrifice or tragic mistake? Revisiting the Eyam plague, 350 years on’, Significance 13, 5 (2016): 20–25.
[3] S. H. Lee-Kwan, N. DeLuca, R. Bunnell, H. B. Clayton, A. S. Turay, and Y. Mansaray, ‘Facilitators and Barriers to Community Acceptance of Safe, Dignified Medical Burials in the Context of an Ebola Epidemic, Sierra Leone, 2014’, Journal of Health Communication 22, sup1 (2017): 24–30.
[4] K. Kuppalli and T. M. Perl, ‘Measles in Texas: waning vaccination and a stark warning for public health’, The Lancet Infectious Diseases 25, 5 (2025): 485–87.
[5] BBC, ‘Mad cow disease: What is BSE?’, 18 October 2018, https://www.bbc.co.uk/news/uk-45906585.
[6] R. M. Ridley and H. F. Baker, ‘Big decisions based on small numbers: Lessons from BSE’, Veterinary Quarterly 21, 3 (1999): 86–92.
[7] The BSE Inquiry, Final Report (London: HMSO, 2000), 1.1223.
[8] BSE Inquiry, Final Report, 7.265.
[9] House of Lords Science and Technology Committee, House of Lords – Science and Technology – Third Report (2000), §1.3.
[10] P. Sandman, ‘The Mad Cow Crisis: Health and the Public Good’, review of The Mad Cow Crisis: Health and the Public Good, edited by Scott C. Ratzan, Journal of Health Psychology 5, 1 (2000): 117–18.
[11] Hansard, Phillips Inquiry, House of Commons, 15 February 2001, Vol. 363, Col. 486.
[12] H. Prempeh, R. Smith and B. Muller, ‘Foot and mouth disease: the human consequences’, British Medical Journal 322, 7286 (2001): 565–66. Given the readership’s interest in history, it is worth mentioning here that one of the worst reported human outbreaks of FMD occurred in Dover in 1884, when it was reported that 205 people contracted the disease from drinking raw, unpasteurised milk from infected cows. While most only suffered from a ‘sore throat’ and fully recovered, two young children were said to have died in the outbreak. See J. W. Stickler, ‘Further observations upon foot-and-mouth disease in its relation to human scarlatina as a prophylactic’, JAMA XV, 7 (1890): 237–43.
[13] National Audit Office, The 2001 Outbreak of Foot and Mouth Disease, HC 939 (London: TSO, 2002).
[14] M. J. Tildesley, P. R. Bessell, M. J. Keeling and M. E. J. Woolhouse, ‘The role of pre-emptive culling in the control of foot-and-mouth disease’, Proceedings of the Royal Society B-Biological Sciences 276, 1671 (2009): 3239–48.
[15] L. Walter, ‘Childhood memories of 2001 foot and mouth crisis’, BBC, 16 February 2011, https://www.bbc.co.uk/news/uk-england-devon-12463515.
[16] A. Donaldson, ‘Lessons to be learned from foot-and-mouth outbreaks’, Vet Times, 21 March 2016, https://www.vettimes.com/news/vets/livestock/lessons-to-be-learned-from-foot-and-mouth-outbreaks.
[17] BBC, ‘Ministers face foot-and-mouth censure’, 22 July 2002, http://news.bbc.co.uk/1/hi/uk_politics/2142734.stm.
[18] Official Journal of the European Union, C 31 E/137, 5 February 2004, §34.
[19] Health Protection Agency, ‘Weekly pandemic flu media update’, 10 December 2009, https://web.archive.org/web/20100113215351/http://www.hpa.org.uk/webw/HPAweb&HPAwebStandard/HPAweb_C/1259152450217?p=1231252394302; D. Hine, The 2009 Influenza Pandemic: An Independent Review of the UK Response to the 2009 Influenza Pandemic (London: UK Cabinet Office, 2010).
[20] O. Bowcott and D. Batty, ‘Swine flu: 65,000 deaths is UK’s worst case scenario’, The Guardian, 16 July 2009, https://www.theguardian.com/uk/2009/jul/16/swine-flu-cases-rise-britain.
[21] D. Hine, The 2009 Influenza Pandemic, 3.
[22] N. Taylor, ‘Review of the use of models in informing disease control policy development and adjustment. A report for Defra’, Veterinary Epidemiology and Economics Research Unit, University of Reading (2003), 6.
[23] D. Hine, The 2009 Influenza Pandemic, 7.
[24] D. Hine, The 2009 Influenza Pandemic, 4.
[25] S. Kettell and P. Kerr, ‘“Guided by the science”: (de)politicising the UK government’s response to the coronavirus crisis’, The British Journal of Politics and International Relations 24, 1 (2022): 11–30.
[26] See, for example, BMA, ‘The public health response by UK governments to COVID-19’, BMA Covid Review (2022): 4.
[27] N. Brusselaers et al., ‘Evaluation of science advice during the COVID-19 pandemic in Sweden’, Humanities and Social Sciences Communications 9, 91 (2022): 4.
[28] T. Olofsson et al., ‘The making of a Swedish strategy: How organizational culture shaped the Public Health Agency’s pandemic response’, SSM – Qualitative Research in Health 13, 2 (2022): https://doi.org/10.1016/j.ssmqr.2022.100082.
[29] The Public Health Agency of Sweden, Folkhälsomyndigheten Pandemiberedskap. Hur vi förbereder oss – ett kunskapsunderlag (2019).
[30] J. Craens, K. Frenken and T. Meelen, ‘Mission-oriented innovation policy: The case of the Swedish “Vision Zero” approach to traffic safety’, Papers in Evolutionary Economic Geography 21.40 (2021): http://econ.geo.uu.nl/peeg/peeg2140.pdf.
[31] The Coronavirus Commission [Coronakommissionen], ‘Summary in English’, SOU 2022:10 (2022): 12.
[32] Coronavirus Commission [Coronakommissionen], ‘Summary in English’, 24.
[33] G. Deleuze and F. Guattari, Nomadology: the War Machine (Seattle: Wormwood Distribution, 2010).
[34] Deleuze and Guattari, Nomadology, 25.
[35] Deleuze and Guattari, Nomadology, 27.
[36] Deleuze and Guattari, Nomadology, 27.
[37] C. Reed and M. Reed, Enough of experts: Expert authority in crisis (Berlin: De Gruyter, 2023), 1, 3.





