Biotechnology can pose a global catastrophic risk in the form of natural pathogens or novel, engineered ones. Such a catastrophe may be brought about by use in warfare, by terrorist attacks, or by accident. Terrorist applications of biotechnology have historically been infrequent; to what extent this reflects a lack of capabilities rather than a lack of motivation remains unresolved. The biotechnology sector has experienced exponential growth, and some scientists (Noun and Chyba) predict that this will lead to major increases in biotechnological capabilities in the coming decades. They argue that risks from biological warfare and bioterrorism are distinct from nuclear and chemical threats because biological pathogens are easier to mass-produce and their production is hard to control, especially as the technological capabilities become available even to individual users.
Phil Torres is pessimistic about the measures aimed at mitigating the risks of artificial pandemics. In his article “How likely is an existential catastrophe?”, he writes, “this trend is indicative of biotechnological development in general: laboratory equipment is becoming cheaper, processes are increasingly automated, and the Internet contains a growing number of complete genomes, including genetic sequences of Ebola and smallpox. The result is that the number of people capable of designing, synthesizing, and dispersing a weaponized microbe will almost certainly increase in the coming decades. Thus biotechnology (and its younger sibling, synthetic biology) will likely become a more significant risk later in the 21st century” (Torres, 7/9/2016).
More risks stemming from novel, engineered pathogens can be expected in the future. Scientists suspect that there is an upper limit on the virulence (deadliness) of naturally occurring pathogens (SA, March 1996), but pathogens may be intentionally or unintentionally genetically modified to change their virulence and other characteristics. For example, Australian researchers unintentionally changed the characteristics of the mousepox virus while trying to develop a virus to sterilize rodents; the modified virus became highly lethal even in vaccinated and naturally resistant mice. The technological means to genetically modify the characteristics of viruses are likely to become more widely available in the future if not properly regulated (Noun, et al., 2008).
The danger of self-replicating, synthetic, incurable viruses should also be considered from a particular angle: the rogue researcher. One possibility is that a disgruntled individual might steal a virus and travel around the world releasing it. An important factor in such a person's motives might be religious or cult-like convictions that, in their mind, justify the act of mass murder on a global scale. This risk is all the more significant because biotech laboratories usually have no provision for psychological profiling of their employees, and the perpetrator could be a lone disgruntled individual anywhere in the organization, whether a laboratory researcher or, perversely, a member of the security staff.
Genetic engineering of new super-organisms could be enormously beneficial for humanity. But it might also go horribly wrong: an engineered pathogen could emerge and be released, whether accidentally or through an act of war, targeting humans or a crucial part of the global ecosystem. The impact of such a release could be even worse than that of any conceivable natural pandemic.