List of 10 Deadliest Diseases in History

  • Cholera
  • Influenza Pandemic
  • Measles
  • Smallpox
  • Typhus
  • Dysentery
  • Malaria
  • Pneumonia
  • Tuberculosis
  • Whooping Cough

One of the deadliest diseases in history, smallpox, was fortunately eradicated by 1980 as a result of a worldwide vaccination campaign. It is impossible to tell exactly how many people died from smallpox, but during the 20th century alone the disease is estimated to have claimed 300 to 500 million lives. Although no new cases have been reported since 1980, there are concerns that smallpox could be used for biological warfare, as the virus is still kept in laboratories in the United States and Russia.

Is Plague Lurking In A Town Near You?

The epidemic of plague in the 14th century was not the only significant plague outbreak recorded in human history. The first reported pandemic broke out in Egypt in 541 and was designated “The Plague of Justinian”. The last major plague event began in the war-torn Yunnan province of China, reaching Hong Kong in 1894.

Even today, plague has not been eradicated, although thanks to the availability of vaccination and antibiotics, few people now die of it. Plague foci still exist in Africa, North and South America, and Asia.

Between 2010 and 2015, 3,248 cases of plague were reported worldwide, including 584 deaths. Most cases occurred in Madagascar, the Democratic Republic of the Congo, and Peru.

From 1 August through 22 November 2017, 2,348 confirmed, probable, and suspected cases of plague, including 202 deaths (a case fatality rate of 8.6%), were reported by the Ministry of Health of Madagascar to the World Health Organization.
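The quoted case fatality rate is simply deaths divided by reported cases; as a quick arithmetic check of the Madagascar figures above, a minimal sketch in Python:

```python
def case_fatality_rate(deaths, cases):
    """Case fatality rate as a percentage: deaths / reported cases * 100."""
    return 100 * deaths / cases

# Madagascar, 1 August through 22 November 2017 (figures from the text above)
cfr = case_fatality_rate(202, 2348)
print(f"{cfr:.1f}%")  # 8.6%
```

Note that a case fatality rate computed this way depends heavily on how many mild cases are actually detected and reported.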

In the United States alone, 1,040 confirmed or probable cases of plague occurred between 1900 and 2016, 80% of which were classified as the bubonic form. In recent years, incidence has ranged from one to seventeen cases per year (an average of seven), with most occurring in the rural West.

A Brief Timeline of Celiac Disease

In 2008, an archaeological dig in Cosa, Italy, revealed the skeleton of an 18- to 20-year-old woman from the first century AD with signs of failure to thrive and malnutrition. The skeleton showed the presence of the celiac-associated gene variant HLA-DQ2.5 and damage typical of celiac disease.

Greek physician and medical writer Aretaeus of Cappadocia gives the earliest clinical account of celiac disease, which he refers to as “The Coeliac Affection.” He names the disease “koiliakos” after the Greek word “koelia” (abdomen) and describes it thus: “If the stomach be irretentive of the food and if it pass through undigested and crude, and nothing ascends into the body, we call such persons coeliacs.”

  • 1600s: Philosopher Blaise Pascal is believed by some to have suffered from celiac disease.

Some believe that philosopher Blaise Pascal suffered from, and perhaps died from, celiac disease. He is said to have experienced abdominal pain throughout his childhood that continued and worsened into adulthood, along with other symptoms now associated with celiac disease, such as neurological issues, migraines, and depression.

  • 1800s: Matthew Baillie describes a diarrheal disorder that improves on a rice-based diet.

British physician and pathologist Matthew Baillie describes a chronic gastrointestinal condition that responded to a rice-heavy diet. He noted in a publication that those who suffered from the disorder experienced chronic diarrhea and malnutrition. He wrote that he’d observed that “some patients have appeared to derive considerable advantage from living almost entirely upon rice.” This rice-heavy diet would most likely be very low in gluten, or even gluten-free, depending on what other ingredients were eaten—which would help those suffering from celiac disease.

  • 1887: Dr. Samuel Gee writes the first modern medical description of celiac disease and hypothesizes it can be treated through diet.

English doctor Samuel Gee says people with “celiac affection” can be cured by diet. Gee first presented the modern definition of celiac disease in a lecture at the Hospital for Sick Children in London, theorizing that if a person were to be cured, it would be through diet. He tried multiple diets with his patients, including a Dutch mussel diet, but during his lifetime he was never able to pinpoint which food triggered the disease.

  • 1920s: Dr. Sidney Haas announces a “banana diet” that treats celiac disease.

American pediatrician Sidney Haas announces a “banana diet” that treats celiac disease after treating children with a diet high in bananas and free of starches. Before Dr. Haas’s diet, more than 30% of children with celiac disease died. Since the diet was gluten-free (albeit unintentionally) and high in calories, it helped children with the disease heal their intestinal villi, and their lives were saved. Parents from all over the United States brought their children with celiac disease to Dr. Haas to be treated, and the banana diet continued to be used for some children until the early 1950s. It had its downsides, though: many believed that once the children were healed they were “cured” and could return to a normal, gluten-containing diet, which led to renewed damage to the villi and a host of other serious effects.

  • 1940s: Dr. Willem Dicke theorizes that wheat is triggering celiac disease and develops a wheat-free diet to treat celiac disease patients.

Dutch pediatrician Willem Karel Dicke hypothesizes that wheat protein may be the trigger of celiac disease. He made the connection during World War II, when bread became unavailable in the Netherlands during the Dutch Famine. Dr. Dicke noticed that throughout this time the mortality rate for celiac disease in his hospital dropped to zero, and he went on to develop a wheat-free diet.

  • 1950s: Gluten is pinpointed as the trigger of celiac disease.

An English medical team shared results of studies showing how celiac disease patients improved when wheat and rye flour were removed from their diets. Gluten, the protein found in wheat, barley, and rye, was later pinpointed as the exact trigger of celiac disease.

  • 1950s: Dr. Margot Shiner develops an instrument to biopsy the small intestine.

German-British gastroenterologist and medical researcher Margot Shiner develops a new technique to biopsy the intestine. Her jejunal biopsy instrument aided in the diagnosis of celiac disease, among other GI disorders, and she has been credited with launching the specialty of modern pediatric gastroenterology.

  • 1970s–1990s: Celiac disease is recognized as an autoimmune disease and genes are pinpointed.

In the 1970s, the HLA-DQ2 gene is associated with celiac disease and dermatitis herpetiformis. In the 1980s, the connection between celiac disease and autoimmune diseases such as Type 1 diabetes becomes accepted within the medical community. By the early 1990s, celiac disease is recognized as an autoimmune disease with a specific genetic basis (either HLA-DQ2 or HLA-DQ8), and in 1997 the role of the antigen tissue transglutaminase (tTG) in celiac disease is discovered.

  • 2000s: Beyond Celiac is established as the first celiac disease patient advocacy group.

Originally named the National Foundation for Celiac Awareness, Beyond Celiac was established as the first celiac disease patient advocacy group dedicated to driving diagnosis and enabling access to gluten-free food. Later, Beyond Celiac pivoted to research for treatments and a cure after studies showed that a gluten-free diet is not enough for many with celiac disease.

  • 2006: First potential drugs for celiac disease begin the clinical trial process.

Larazotide acetate (formerly known as AT-1001), an eight-amino-acid peptide, was one of the first potential medical treatments for celiac disease to begin testing in clinical trials. Since then, many others have joined the race for celiac disease treatments, and studies continue to show the burden of the gluten-free diet along with the fact that many people with celiac disease do not heal despite following the diet strictly.

Importing Disease

The practice of quarantine began during the 14th century, in an effort to protect coastal cities from plague epidemics. Cautious port authorities required ships arriving in Venice from infected ports to sit at anchor for 40 days before landing — the origin of the word quarantine from the Italian “quaranta giorni”, or 40 days.

One of the first instances of relying on geography and statistical analysis was in mid-19th-century London, during a cholera outbreak. In 1854, Dr. John Snow concluded that cholera was spreading via tainted water and decided to plot neighborhood mortality data directly on a map. This method revealed a cluster of cases around a specific pump from which residents drew their water.
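Snow’s approach, in modern terms, amounts to tallying deaths by the nearest water source and looking for a cluster. A minimal sketch of that idea follows; the pump coordinates and death locations are entirely hypothetical, for illustration only:

```python
from collections import Counter
from math import dist

# Hypothetical pump locations (x, y) -- not Snow's actual survey data
pumps = {"Broad St": (0, 0), "Rupert St": (5, 4), "Bridle St": (-4, 6)}

# Hypothetical locations of cholera deaths plotted on the same map
deaths = [(0.5, 0.2), (-0.3, 1.1), (0.8, -0.4), (4.9, 3.7), (0.1, 0.9)]

def nearest_pump(point):
    """Assign a death to the closest pump, as points would cluster on a map."""
    return min(pumps, key=lambda name: dist(point, pumps[name]))

tally = Counter(nearest_pump(d) for d in deaths)
print(tally.most_common(1))  # the pump with the largest cluster of deaths
```

With these made-up numbers, most deaths fall nearest one pump, which is exactly the visual signal Snow read off his map.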

While the interactions created through trade and urban life play a pivotal role, it is also the virulence of particular diseases that determines the trajectory of a pandemic.

History of Huntington’s Disease

George Huntington (April 9, 1850 – March 3, 1916) was an American physician from Long Island, New York, who contributed the clinical description of the disease that bears his name: Huntington’s disease. Dr. Huntington wrote his paper “On Chorea” when he was 22 years old, a year after receiving his medical degree from Columbia University in New York. “On Chorea” was first published in the Medical and Surgical Reporter of Philadelphia on April 13, 1872.

In the 100+ years since the death of George Huntington in 1916, the disorder he described as a ‘medical curiosity’ has become a focus of intense medical and scientific interest, in part because of the contribution of families in generating knowledge about this family disease. As many writers have noted, George Huntington’s own family played a crucial role in defining this illness. What has been less appreciated is that the affected families he described also played a role, in ways that George Huntington himself acknowledged. Not quite 22 years old, just graduated from Columbia University’s College of Physicians and Surgeons in New York City, and with little clinical experience, no established medical practice, and no patients of his own with the disorder, he wrote an account in 1872 that William Osler considered one of the most succinct and accurate portraits of a disease ever written. It was not the earliest medical account of hereditary chorea but it was certainly the most complete. And for social and cultural as well as medical and scientific reasons, it played a far more important role in defining the discrete clinical entity that soon came to be known as ‘Huntington’s chorea’ and by the late 1960s, as ‘Huntington’s disease’.

Despite considerable recognition during his lifetime, George Huntington remained a small town family physician but not a provincial or isolated one. He was aware that his paper had drawn the attention of the medical profession at home and abroad and that this had helped reveal the disease in many parts of the world. He was in touch with some eminent clinicians of his day, including Osler, and an invited speaker on Huntington’s chorea at medical societies such as the influential New York Neurological Society. At a time when medicine was becoming increasingly ‘scientific’, he too placed his hope in research, although he chose not to pursue research himself. Alluding to the unknown pathology of chorea, which had intrigued him from the start, he trusted ‘that science, which has accomplished such wonders through the never-tiring devotion of its votaries, may yet “overturn and overturn, and overturn it,” until it is laid open to the light of day’.

Disease mechanisms

The paper focuses on case studies of four vector-borne diseases (plague, malaria, yellow fever, and trypanosomiasis) from 2.6 million years ago to the present day. These case studies revealed five mechanisms by which such illnesses shape human society. Below are examples of each:

Killing or debilitating large numbers of people

Plague, caused by the bacterium Yersinia pestis, is transmitted by fleas carried by rodents. The Black Death, the most famous plague pandemic, wiped out 30% of Europe’s population in the Middle Ages and drastically changed its economy. The plummet in labor helped overturn the feudal system, allowing surviving serfs to command greater wages and power.

Differentially affecting populations

Yellow fever, a vector-borne disease transmitted by mosquitoes, is closely connected to the enslavement of Black people. On the island of Barbados, the most affluent British colony, English settlers came to rely on slave labor. In 1647, a yellow fever epidemic broke out as slave vessels introduced mosquitoes and the yellow fever virus. Because African people were twice as likely to survive yellow fever due to immunity gained from viral exposure while living in Africa, exploiting their forced labor was especially profitable. As a result, the exploitation of enslaved people grew into Barbados’ main labor system and expanded to other British colonies.

Weaponization of disease to promote hierarchies of power

In ancient Rome, poor agricultural workers worked in low-lying fields and lived in unsanitary housing. This greatly increased their risk of being bitten by malaria-infected mosquitoes compared to wealthier Romans. Malaria may have also enforced gender inequities in ancient Rome, as some pregnant women may have been confined indoors to avoid risks associated with malarial infection, including miscarriage and fetal abnormalities.

Catalyzing change in society

In 1793, a yellow fever outbreak struck Philadelphia, killing half of all those afflicted. Although the Philadelphia government did not yet understand how yellow fever was transmitted, it eventually realized that cleaning up dirty water reduced the spread. The illness prompted the city to provide clean drinking water and construct sewage systems for its residents, laying the foundation of the modern public health system in the process.

Changing human relationships with the land and environment

Trypanosomiasis, a parasitic disease carried by the tsetse fly, infects wildlife, livestock, and humans in Africa. In Africa’s pre-colonial history, the disease limited the use of domesticated animals in affected areas, preventing intensive farming and large-scale agriculture and impeding economic growth and urbanization.

“We were taken aback by the extent to which the impacts of vector-borne disease have historically splintered across racial and societal lines,” said Athni.

Structural racism, including what neighborhoods people can live in and their access to intergenerational wealth, is linked to disparities in rates of diabetes, hypertension and other chronic diseases associated with stress, Mordecai explained. These disparities are also apparent in the COVID-19 pandemic, where the disease’s outcomes are more serious for individuals suffering with these conditions. This disproportionate burden further amplifies the vulnerability of already disadvantaged communities.

“When you layer on an emerging pandemic with existing health disparities, it disproportionally affects Black and Hispanic communities,” said Mordecai.

Racial disparities also put historically marginalized communities at greater risk of being exposed to the virus. These communities, for instance, are more likely to be essential workers, lacking the luxury to safely shelter in place or have their groceries delivered.

“It’s easy to think that communities of color aren’t social distancing enough or not practicing proper hygiene,” said Roberts, who is a co-author on the paper. “But that thinking completely neglects the social conditions that have made those communities more vulnerable to begin with.”

The relationship between COVID-19 and structural inequality is unfortunately not limited to modern times or the U.S.; it is a pattern that has repeated throughout history and across the globe. Outbreaks of leishmaniasis, a vector-borne disease spread by phlebotomine sand flies, have affected hundreds of thousands of Syrians within refugee camps, a result of overcrowding in areas with poor sanitation. And when the first cases of the 2014 Ebola outbreak emerged in West Africa, scientists in the United States were slow to find ways to combat it until the disease showed up closer to home.

The authors hope that this paper will motivate scientists to be more proactive in protecting people in historically disadvantaged communities from disease.

“The paper does a spectacular job documenting the problem,” said Roberts. “Now it will be important to maintain an interdisciplinary focus that can dismantle it.”

Ancient History of Lyme Disease in North America Revealed with Bacterial Genomes

A team of researchers led by the Yale School of Public Health has found that the Lyme disease bacterium is ancient in North America, circulating silently in forests for at least 60,000 years—long before the disease was first described in Lyme, Connecticut, in 1976 and long before the arrival of humans.

For the first time, the full genomes of the Lyme disease bacterium, Borrelia burgdorferi, were sequenced from deer ticks to reconstruct the history of this invading pathogen.

The finding shows that the ongoing Lyme disease epidemic was not sparked by a recent introduction of the bacterium or by an evolutionary change, such as a mutation that made the bacterium more readily transmissible. Instead, it is tied to the ecological transformation of much of North America: forest fragmentation and the population explosion of deer in the last century created optimal conditions for the spread of ticks and triggered this ongoing epidemic.

Katharine Walter conducted the research while a doctoral student at Yale School of Public Health and is lead author of the study published in Nature Ecology and Evolution.

“The Lyme disease bacterium has long been endemic,” she said. “But the deforestation and subsequent suburbanization of much of New England and the Midwest created conditions for deer ticks—and the Lyme disease bacterium—to thrive.”

Lyme disease is the most common vector-borne disease in North America. Since it was first described in the 1970s, the disease has rapidly spread across New England and the Midwest. Reported cases of Lyme disease have more than tripled since 1995 and the Centers for Disease Control and Prevention now estimate that more than 300,000 Americans fall ill each year.

The team turned to genomics to reveal the bacterium’s origins. By comparing B. burgdorferi genomes collected from different areas and over a 30-year period, the team built an evolutionary tree and reconstructed the history of the pathogen’s spread.
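Distance-based methods of this kind start from pairwise differences between sequences; the closest pair of sequences is the first candidate to be joined on the tree. A toy sketch of that first step, using made-up five-letter “genomes” rather than real B. burgdorferi data:

```python
from itertools import combinations

# Toy sequences standing in for sequenced genomes -- purely illustrative
genomes = {
    "Northeast": "AATGC",
    "Midwest":   "AATGG",
    "South":     "ACTCG",
    "West":      "GCTAG",
}

def hamming(a, b):
    """Number of differing positions between two equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))

# Pairwise distance matrix: the raw material for a distance-based tree
dists = {(p, q): hamming(genomes[p], genomes[q])
         for p, q in combinations(genomes, 2)}

# The most similar pair would be joined first when building the tree
closest = min(dists, key=dists.get)
print(closest, dists[closest])
```

Real phylogenetic reconstruction from million-letter genomes uses far more sophisticated models of sequence evolution, but the underlying input, a matrix of pairwise differences, is the same.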

Researchers collected deer ticks, vectors of B. burgdorferi, from across New England. They focused sampling efforts on areas predicted to be sources of the epidemic: Cape Cod and areas around Long Island Sound. Over 7,000 ticks were collected from these areas during the summer of 2013. To extend the spatial scope of the study, collaborators in the South, the Midwest, and across Canada contributed ticks to the team.

Using a method the team previously developed to preferentially sequence bacterial DNA (rather than the tick’s own DNA), the researchers sequenced 148 B. burgdorferi genomes. Earlier studies of the evolutionary history of B. burgdorferi relied on short DNA markers rather than full genomes. Reading the one million letters of the full bacterial genome allowed the team to piece together a more detailed history. The team drew an updated evolutionary tree, which showed that the bacterium likely originated in the northeastern United States and spread south and west across North America to California.

Birds likely transported the pathogen long distances to new regions, and small mammals continued its spread. The bacterial genomes also carried a signature of dramatic population growth: as the bacterium spread, it proliferated.

The tree was also far older than the team had expected: at least 60,000 years old. This means that the bacterium existed in North America long before the disease was described by medicine, and long before humans first arrived in North America from across the Bering Strait (about 24,000 years ago).

These findings clarify that the bacterium is not a recent invader. Diverse lineages of B. burgdorferi have long existed in North America, and the current Lyme disease epidemic is the result of ecological changes that have allowed deer, ticks, and, finally, the bacterium to invade.

The explosion of deer into suburban landscapes in the twentieth century, free of wolf predators and protected by strict hunting restrictions, allowed deer ticks to invade rapidly throughout much of New England and the Midwest. Climate change has also contributed: warmer winters accelerate ticks’ life cycles and allow them to expand their range an estimated 28 miles farther north each year.

Ticks expanded into suburbanized landscapes—full of animals like white-footed mice and robins, excellent hosts for B. burgdorferi. The expansion of ticks into habitats with ideal hosts allowed the bacterium to spread.

Adalgisa Caccone, a lecturer at Yale in Ecology and Evolutionary Biology and a senior research scientist at the School of Public Health, and Maria Diuk-Wasser, of the Department of Ecology, Evolutionary and Environmental Biology at Columbia University, are senior authors. Giovanna Carpi, of the Johns Hopkins School of Medicine, also contributed to the research.

7. Dengue fever

Dengue is a tropical disease caused by the dengue virus, which is spread by mosquitoes, especially the Aedes aegypti species. It causes symptoms such as high fever, headache, vomiting, muscle and joint pain, and a skin rash, but in some cases it progresses to severe hemorrhagic fever and death.

Luckily, there is now a vaccine for dengue, and antiviral drugs to treat it are in development.

While there may have been cases of dengue fever since the 5th century AD, the earliest report of an epidemic is from 1779, when the disease swept across Southeast Asia, Africa, and North America. From then until the end of the 20th century, epidemics were rare, but they have become more frequent due to ecological disruption.

These are some of the major dengue outbreaks throughout history: [8]

  • 1778 Spain dengue fever outbreak
  • 2000 Central America dengue epidemic
  • 2004–06 dengue outbreak in Singapore, India, Indonesia, Pakistan, and the Philippines
  • 2007 dengue fever epidemic in Puerto Rico, the Dominican Republic, and Mexico
  • 2008 Brazil dengue epidemic
  • 2010 dengue fever epidemic, worldwide
  • 2011 dengue outbreak in Pakistan
  • 2017 dengue outbreak in Sri Lanka
  • 2019–20 dengue fever epidemic


Plague is an ancient disease that was described during Classical times as occurring in North Africa and the Middle East. It is sometimes presumed to be the disease behind several historic epidemics, such as the pestilence described as striking the Philistines in the biblical book of 1 Samuel. Unequivocal evidence for its early existence comes from the discovery of genomic traces of Y. pestis in the teeth of Neolithic farmers in Sweden dated to roughly 4,900 years ago and from analyses of ancient DNA in the teeth of Bronze Age humans, which indicate that Y. pestis was present in Asia and Europe by between 3000 and 800 BCE. It is impossible, however, to verify the true nature of these early outbreaks.

The first great plague pandemic to be reliably reported occurred during the reign of the Byzantine emperor Justinian I in the 6th century CE. According to the historian Procopius and others, the outbreak began in Egypt and moved along maritime trade routes, striking Constantinople in 542. There it killed residents by the tens of thousands, the dead falling so quickly that authorities had trouble disposing of them. Judging by descriptions of the symptoms and mode of transmission of the disease, it is likely that all forms of plague were present. Over the next half-century, the pandemic spread westward to port cities of the Mediterranean and eastward into Persia. Christian writers such as John of Ephesus ascribed the plague to the wrath of God against a sinful world, but modern researchers conclude that it was spread by domestic rats, which traveled in seagoing vessels and proliferated in the crowded, unhygienic cities of the era.

The next great plague pandemic was the dreaded Black Death of Europe in the 14th century. The number of deaths was enormous, reaching two-thirds or three-fourths of the population in various parts of Europe. It has been calculated that one-fourth to one-third of the total population of Europe, or 25 million persons, died from plague during the Black Death.

For the next three centuries, outbreaks of plague occurred frequently throughout the continent and the British Isles. The Great Plague of London of 1664–66 caused between 75,000 and 100,000 deaths in a population estimated at 460,000. Plague raged in Cologne and on the Rhine from 1666 to 1670 and in the Netherlands from 1667 to 1669, but after that it seems to have subsided in western Europe. Between 1675 and 1684 a new outbreak appeared in North Africa, Turkey, Poland, Hungary, Austria, and Germany, progressing northward. Malta lost 11,000 persons in 1675, Vienna at least 76,000 in 1679, and Prague 83,000 in 1681. Many northern German cities also suffered during this time, but in 1683 plague disappeared from Germany. France saw the last of plague in 1668, until it reappeared in 1720 in the port city of Marseille, where it killed as many as 40,000 people.

After those last outbreaks, plague seems to have disappeared from Europe, with the exception of an area at the Caucasus border. Various explanations have been offered: progress in sanitation, hospitalization, and cleanliness; a change in domestic housing that excluded rats from human dwellings; abandonment of old trade routes; and a natural quiescent phase in the normal rise and decline of epidemic diseases. Although some of those factors may have been at work, many of those explanations were premised on the notion that plague had become firmly established in black rat populations in Europe. But whereas the plague bacterium had disappeared from much of the continent, rats remained. Modern research has suggested that plague arrived in Europe via maritime trade routes from Central Asia—namely, those that comprised part of the Silk Road. The disease may have arrived in waves, having been reimported multiple times, as a result of climate fluctuations that affected rodent populations in Asia.

At the time of the plague outbreaks in Europe, the disease was poorly understood from a medical standpoint, as the very concept of an infectious organism was unknown. As late as 1768 the first edition of the Encyclopædia Britannica repeated the commonly held scientific notion that plague was a “pestilential fever” arising from a “poisonous miasma,” or vapour, that had been brought “from eastern countries” and was “swallowed in with the air.”

The pestilential poison disturbs all the functions of the body; for unless it be expelled to the external parts, it is certainly fatal.

Expulsion of the poison was thought to be best accomplished by either natural rupture of the buboes or, if necessary, lancing and draining them. Other recommended means were bloodletting, sweating, induction of vomiting, and loosening of the bowels.

During the 18th and early part of the 19th century, plague continued to prevail in Turkey, North Africa, Egypt, Syria, and Greece. Once it was a maxim that plague never appeared east of the Indus River, but during the 19th century it afflicted more than one district of India: in 1815 Gujarat, in 1815 Sind, in 1823 the Himalayan foothills, and in 1836 Rajasthan. These outbreaks merely set the stage for the third great plague pandemic, which is thought to have gained momentum in Yunnan province, southwestern China, in the 1850s and finally reached Guangzhou (Canton) and Hong Kong in 1894. These port cities became plague-distribution centres, and between 1894 and 1922 the disease spread throughout the whole world, more widely than in any preceding pandemic, resulting in more than 10 million deaths. Among the many points infected were Bombay in 1896, Calcutta in 1898, Cape Town and San Francisco in 1900, Bangkok in 1904, Guayaquil (Ecuador) in 1908, Colombo (Sri Lanka) in 1914, and Pensacola (Florida) in 1922. Almost all the European ports were struck, but, of all the areas affected, India suffered the most.

The third plague pandemic was the last, for it coincided with (and in some cases motivated) a series of achievements in the scientific understanding of the disease. By the end of the 19th century, the germ theory of disease had been put on a sound empirical basis by the work of the great European scientists Louis Pasteur, Joseph Lister, and Robert Koch. In 1894, during the epidemic in Hong Kong, the organism that causes plague was isolated independently by two bacteriologists, the Frenchman Alexandre Yersin, working for the Pasteur Institute, and the Japanese Kitasato Shibasaburo, a former associate of Koch. Both men found bacteria in fluid samples taken from plague victims, then injected them into animals and observed that the animals died quickly of plague. Yersin named the new bacillus Pasteurella pestis, after his mentor, but in 1970 the bacterium was renamed Yersinia pestis, in honour of Yersin himself.

It remained to be determined how the bacillus infected humans. It had long been noticed in many epidemic areas that unusual deaths among rats preceded outbreaks of plague among humans, and this link was particularly noted in the outbreaks in India and China. The relationship was so striking that in 1897 Japanese physician Ogata Masanori described an outbreak on Formosa as “ratpest” and showed that rat fleas carried the plague bacillus. The following year Paul-Louis Simond, a French researcher sent by the Pasteur Institute to India, announced the results of experiments demonstrating that Oriental rat fleas (Xenopsylla cheopis) carried the plague bacillus between rats. It was then demonstrated definitively that rat fleas would infest humans and transmit plague through their bites. With that, massive rat-proofing measures were instituted worldwide in maritime vessels and port facilities, and insecticides were used in areas where plague had broken out. Beginning in the 1930s, sulfa drugs and then antibiotics such as streptomycin gave doctors a very effective means of attacking the plague bacillus directly.

The effectiveness of these measures is told in the declining numbers of plague deaths over the following decades. From a maximum of more than one million in 1907, deaths dropped to approximately 170,000 per year in 1919–28, 92,000 in 1929–38, 22,000 in 1939–48, and 4,600 in 1949–53. Plague is no longer an epidemic disease of port cities. It is now mainly of campestral or sylvatic (that is, open-field or woodland) origin, striking individuals and occasionally breaking out in villages and rural areas where Yersinia is kept in a constant natural reservoir by various types of rodents, including ground squirrels, voles, and field mice.

In the 21st century plague was relatively rare. From 2010 to 2015 just 3,248 cases of plague, with 584 deaths, were documented worldwide. The main regions of plague included western North America; the Andes region and Brazil in South America; a broad band across Southwest, Central, and Southeast Asia; and eastern Africa. By 2020 most cases occurred in Madagascar, Peru, and the Democratic Republic of the Congo.

With the rise of global terrorism, plague has come to be seen as a potential weapon of biological warfare. During World War II Japan is said to have spread Yersinia-infected fleas in selected areas of China, and during the Cold War the United States and the Soviet Union developed means for spreading Yersinia directly as an aerosol—a particularly efficient way to infect people with lethal pneumonic plague. Such an attack might cause a high casualty rate in only limited areas, but it might also create panic in the general population. In response, some governments have developed plans and stockpiled medications for dealing with emergency outbreaks of plague.

The Editors of Encyclopaedia Britannica. This article was most recently revised and updated by Kara Rogers, Senior Editor.

Historical Perspectives: History of CDC

CDC, an institution synonymous around the world with public health, will be 50 years old on July 1. The Communicable Disease Center was organized in Atlanta, Georgia, on July 1, 1946; its founder, Dr. Joseph W. Mountin, was a visionary public health leader who had high hopes for this small and comparatively insignificant branch of the Public Health Service (PHS). It occupied only one floor of the Volunteer Building on Peachtree Street and had fewer than 400 employees, most of whom were engineers and entomologists. Until the previous day, they had worked for Malaria Control in War Areas, the predecessor of CDC (Figure_1), which had successfully kept the southeastern states malaria-free during World War II and, for approximately 1 year, free of murine typhus fever. The new institution would expand its interests to include all communicable diseases and would be the servant of the states, providing practical help whenever called.

Distinguished scientists soon filled CDC's laboratories, and many states and foreign countries sent their public health staffs to Atlanta for training. Any tropical disease with an insect vector and all those of zoological origin came within its purview. Dr. Mountin was not satisfied with this progress, and he impatiently pushed the staff to do more. He reminded them that except for tuberculosis and venereal disease, which had separate units in Washington, D.C., CDC was responsible for any communicable disease. To survive, it had to become a center for epidemiology.

Medical epidemiologists were scarce, and it was not until 1949 that Dr. Alexander Langmuir arrived to head the epidemiology branch. He saw CDC as "the promised land," full of possibilities. Within months, he launched the first-ever disease surveillance program, which confirmed his suspicion that malaria, on which CDC spent the largest portion of its budget, had long since disappeared. Subsequently, disease surveillance became the cornerstone on which CDC's mission of service to the states was built and, in time, changed the practice of public health.

The outbreak of the Korean War in 1950 was the impetus for creating CDC's Epidemic Intelligence Service (EIS). The threat of biological warfare loomed, and Dr. Langmuir, the most knowledgeable person in PHS about this arcane subject, saw an opportunity to train epidemiologists who would guard against ordinary threats to public health while watching out for alien germs. The first class of EIS officers arrived in Atlanta for training in 1951 and pledged to go wherever they were called for the next 2 years. These "disease detectives" quickly gained fame for "shoe-leather epidemiology" through which they ferreted out the cause of disease outbreaks.

The survival of CDC as an institution was not at all certain in the 1950s. In 1947, Emory University gave land on Clifton Road for a headquarters, but construction did not begin for more than a decade. PHS was so intent on research and the rapid growth of the National Institutes of Health that it showed little interest in what happened in Atlanta. Congress, despite the long delay in appropriating money for new buildings, was much more receptive to CDC's pleas for support than either PHS or the Bureau of the Budget.

Two major health crises in the mid-1950s established CDC's credibility and ensured its survival. In 1955, when poliomyelitis appeared in children who had received the recently approved Salk vaccine, the national inoculation program was stopped. The cases were traced to contaminated vaccine from a laboratory in California; the problem was corrected, and the inoculation program, at least for first and second graders, was resumed. The resistance of these 6- and 7-year-olds to polio, compared with that of older children, proved the effectiveness of the vaccine. Two years later, surveillance was used again to trace the course of a massive influenza epidemic. From the data gathered in 1957 and subsequent years, the national guidelines for influenza vaccine were developed.

CDC grew by acquisition. The venereal disease program came to Atlanta in 1957 and with it the first Public Health Advisors, nonscience college graduates destined to play an important role in making CDC's disease-control programs work. The tuberculosis program moved in 1960, immunization practices and the MMWR in 1961. The Foreign Quarantine Service, one of the oldest and most prestigious units of PHS, came in 1967; many of its positions were soon switched to other uses as better ways of doing the work of quarantine, primarily through overseas surveillance, were developed. The long-established nutrition program also moved to CDC, as well as the National Institute for Occupational Safety and Health, and work of already established units increased. Immunization tackled measles and rubella control; epidemiology added family planning and surveillance of chronic diseases. When CDC joined the international malaria-eradication program and accepted responsibility for protecting the earth from moon germs and vice versa, CDC's mission stretched overseas and into space.

CDC played a key role in one of the greatest triumphs of public health: the eradication of smallpox. In 1962 it established a smallpox surveillance unit, and a year later tested a newly developed jet gun and vaccine in the Pacific island nation of Tonga. After refining vaccination techniques in Brazil, CDC began work in Central and West Africa in 1966. When millions of people there had been vaccinated, CDC used surveillance to speed the work along. The World Health Organization used this "eradication escalation" technique elsewhere with such success that global eradication of smallpox was achieved by 1977. The United States spent only $32 million on the project, about the cost of keeping smallpox at bay for 2-1/2 months.

CDC also achieved notable success at home tracking new and mysterious disease outbreaks. In the mid-1970s and early 1980s, it found the cause of Legionnaires disease and toxic-shock syndrome. A fatal disease, subsequently named acquired immunodeficiency syndrome (AIDS), was first mentioned in the June 5, 1981, issue of MMWR. Since then, MMWR has published numerous follow-up articles about AIDS, and one of the largest portions of CDC's budget and staff is assigned to address this disease.

Although CDC succeeded more often than it failed, it did not escape criticism. For example, television and press reports about the Tuskegee study on long-term effects of untreated syphilis in black men created a storm of protest in 1972. This study had been initiated by PHS and other organizations in 1932 and was transferred to CDC in 1957. Although the effectiveness of penicillin as a therapy for syphilis had been established during the late 1940s, participants in this study remained untreated until the study was brought to public attention. CDC also was criticized because of the 1976 effort to vaccinate the U.S. population against swine flu, the infamous killer of 1918-19. When some vaccinees developed Guillain-Barre syndrome, the campaign was stopped immediately; the epidemic never occurred.

As the scope of CDC's activities expanded far beyond communicable diseases, its name had to be changed. In 1970 it became the Center for Disease Control, and in 1981, after extensive reorganization, Center became Centers. The words "and Prevention" were added in 1992, but, by law, the well-known three-letter acronym was retained. In health emergencies CDC means an answer to SOS calls from anywhere in the world, such as the recent one from Zaire where Ebola fever raged.

Fifty years ago CDC's agenda was noncontroversial (hardly anyone objected to the pursuit of germs), and Atlanta was a backwater. In 1996, CDC's programs are often tied to economic, political, and social issues, and Atlanta is as near Washington as the tap of a keyboard (Figure_2). Adapted for MMWR by Elizabeth W. Etheridge, Ph.D., from her book, Sentinel for Health: A History of the Centers for Disease Control. Berkeley, California: University of California Press, 1992.

Editorial Note

Editorial Note: When CDC's name changed in 1970, from the Communicable Disease Center to the Center for Disease Control, CDC scientists were poised to accept new challenges. The most notable of the agency's many achievements in the following 10 years was its role in global smallpox eradication, a program that finally succeeded because of the application of scientific principles of surveillance to a complex problem. In the realm of infectious diseases, CDC maintained its preeminence, identifying the Ebola virus and the sexual transmission of hepatitis B, and isolating the hepatitis C virus and the bacterium causing Legionnaires disease. The Study of the Effectiveness of Nosocomial Infection Control (SENIC) was the most expensive study the agency had ever undertaken and proved for the first time the effectiveness of recommended infection-control practices. Other studies included identification of the association of Reye syndrome with aspirin use, the relation between liver cancer and occupational exposure to vinyl chloride, and the harmful effects of the popular liquid protein diet.

The 1980s institutionalized what is considered to be a critically important scientific activity at CDC -- the collaboration of laboratorians and epidemiologists. The decade began with the national epidemic of toxic-shock syndrome, documentation of the association with a particular brand of tampons, and the subsequent withdrawal of that brand from the market. CDC collaboration with the National Center for Health Statistics (NCHS) resulted in the removal of lead from gasoline, which in turn has markedly decreased this exposure in all segments of the population. The major public health event of the 1980s was the emergence of AIDS. CDC helped lead the response to this epidemic, including characterization of the syndrome and defining risk factors for disease.

CDC became involved in two very large epidemiologic studies during the 1980s. First, the Cancer and Steroid Hormone Study conducted in collaboration with the National Cancer Institute assessed the risks for breast, cervical, and ovarian cancers associated with both oral contraceptives and estrogen replacement therapy. Second, at the request of Congress, CDC undertook a series of studies of the health effects of service in Vietnam on veterans and their offspring, which led to a landmark contribution of the laboratory -- the development of a serum test for dioxin able to measure the toxicant in parts per quadrillion. This decade also introduced scientifically based rapid assessment methods to disaster assistance and sentinel health event surveillance to occupational public health. Epi Info, a software system for the practice of applied epidemiology, was introduced and now has been translated into 12 languages for tens of thousands of users globally. Finally, during the 1980s, NCHS was moved to CDC, further enhancing CDC's information capabilities to meet national needs.

The 1990s have been characterized by continuing applications of CDC's classic field-oriented epidemiology, as well as by the development of new methodologies. For example, the disciplines of health economics and decision sciences were merged to create a new area of emphasis -- prevention effectiveness -- as an approach for making more rational choices for public health interventions. In 1993, the investigation of hantavirus pulmonary syndrome required a melding of field epidemiology with sensitivity to and involvement of American Indians and their culture. Similarly, the response to global problems with Ebola virus and plague underscores the importance of adapting these new methodologies. Other major CDC contributions to the world's health include global polio eradication efforts and efforts to prevent neural tube defects. Finally, in October 1992, Congress changed CDC's official name to the Centers for Disease Control and Prevention, to recognize CDC's leadership role in prevention. Today, CDC is both the nation's prevention agency and a global leader in public health. As the world enters the new millennium, CDC will remain the agency ready to address the challenges to its vision of healthy people in a healthy world through prevention.

Editorial Note by: Office of the Director, Epidemiology Program Office, CDC.
