U.S. military battlefield mortality has improved since World War II, but there have been alarming exceptions
New analysis shows that while the survivability of battlefield wounds has steadily improved for United States service members since World War II, several spikes in mortality during subsequent conflicts bucked that trend. By understanding these spikes and taking steps to improve readiness between conflicts, the military could save troops’ lives in the future. These insights were published in a special supplement of the Journal of Trauma and Acute Care Surgery focusing on the military.
“This shows us the big picture of combat casualty outcomes from the beginning of World War II through the modern era, and, at the same time, it also provides significant details on the month-to-month outcomes in each individual war,” said the supplement’s editor and the study’s first author, Jeremy Cannon, MD, trauma medical director and section chief of Trauma, as well as an associate professor of Surgery, at Penn Medicine. “In all, this is good news because our outcomes have improved significantly over time. However, we see that there is still work to be done, specifically in identifying areas for improvement and in keeping our medical corps ready for the next conflict.”
The researchers examined several metrics for this study: the case fatality rate (CFR), a measure of the total lethality of the battlefield, calculated by dividing the total number of combat deaths by the combined number of combat deaths and combat wounded; the killed in action (KIA) rate, the percentage of combatants who died before hospitalization; and the died of wounds (DOW) rate, the percentage of the wounded who died after receiving hospital-level care.
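For concreteness, all three metrics follow directly from casualty counts. Below is a minimal Python sketch using hypothetical figures, not the study’s data; the denominator conventions follow the definitions above, though published studies sometimes vary in how they count the wounded.

```python
# Hypothetical casualty counts for one conflict (illustrative only).
killed_in_action = 500   # died before reaching hospital care
died_of_wounds = 100     # died after receiving hospital-level care
wounded = 4_500          # all combat wounded, including the 100 who later died

deaths = killed_in_action + died_of_wounds
casualties = killed_in_action + wounded  # every combatant killed or wounded

cfr = deaths / casualties                 # case fatality rate
kia_rate = killed_in_action / casualties  # killed in action rate
dow_rate = died_of_wounds / wounded       # died of wounds rate

print(f"CFR: {cfr:.1%}")       # 12.0%
print(f"KIA: {kia_rate:.1%}")  # 10.0%
print(f"DOW: {dow_rate:.1%}")  # 2.2%
```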
Four conflicts were studied, each of which had to span at least three years to provide enough data: World War II, the Korean War, the Vietnam War, and the recent wars in the Middle East, with Operation Enduring Freedom (the Afghanistan conflict) and Operation Iraqi Freedom assessed both separately and together.
Across the period since the start of World War II, the researchers found significant gains on two of their measures: the case fatality rate fell from 55 percent at the start of World War II to 12 percent in the most recent conflicts, and the KIA rate fell from 52 to 5 percent. These figures confirm earlier historical studies of the big picture.
However, as the research team dove into the month-to-month outcomes of each conflict, they found major spikes in mortality within individual wars. In Vietnam, for example, fatality rates that had fallen to approximately 19 percent in the middle of the conflict rose to 63 percent during the war’s final stages. Cannon and his co-authors speculate that factors such as poor compliance with body armor use and the withdrawal of medical assets despite continued combat may have contributed, but this finding also marks an important area for further analysis.
Additionally, each conflict they studied began with higher-than-expected fatality rates, given what had been achieved in the previous war. This was determined by examining the “observed to expected mortality ratio,” which takes the lowest sustained case fatality rate from the previous conflict and makes it the benchmark for the next. The reasoning behind this measure is that progress made in a previous war should carry over to the next; in practice, this study showed that is not universally the case.
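As a sketch of how this benchmark works, the observed-to-expected (O/E) ratio simply divides each period’s observed CFR by the previous conflict’s best sustained CFR; a ratio above 1 means mortality is worse than the prior war’s low point. The numbers below are hypothetical, not figures from the study.

```python
# Hypothetical observed-to-expected (O/E) mortality ratio calculation.
expected_cfr = 0.12  # lowest sustained CFR of the previous conflict (benchmark)

# Illustrative monthly CFRs during the opening of the next conflict.
observed_monthly_cfr = [0.38, 0.30, 0.21, 0.15, 0.11]

for month, observed in enumerate(observed_monthly_cfr, start=1):
    oe_ratio = observed / expected_cfr
    status = "above benchmark" if oe_ratio > 1 else "at or below benchmark"
    print(f"Month {month}: O/E = {oe_ratio:.2f} ({status})")
```

An O/E ratio of 3, for example, corresponds to the “more than triple the expected rate” pattern described next.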
In every conflict studied, fatality rates exceeded the previous conflict’s best numbers at some point during the first year, and in some cases into the second. Rates for U.S. troops ran at more than triple the expected rate through significant parts of the first year of both World War II and Vietnam, and Operation Enduring Freedom at one point reached double the benchmark of the previous conflict. Although the Korean War stayed close to the expected fatality rate through most of its first two years, it began above the benchmark and also closed its second year above it.
The authors called these unexpected increases “the peacetime effect.”
“Most major conflicts are separated by a number of years and, in many ways, you’re starting from scratch at the beginning of each conflict,” Cannon explained. “In the time between wars, those with deployment experience move off into civilian practice, and the lessons learned fade from the military’s collective memory. Then, when the next conflict does occur, many medical personnel have never deployed before and many also aren’t as versed in military history and the experiences of others.”
Another unexpected finding was that the DOW rate stayed roughly constant across all of the earlier conflicts, then increased during the recent wars in Iraq and Afghanistan.
“Why specifically did the rate of death increase after casualties reached the hospital?” Cannon asked. “Although this may be an artifact of being able to more rapidly transport those with worse wounds in the modern era, this finding needs to be examined more closely.”
Cannon believes these phenomena warrant further study to pinpoint their triggers. But he already has one idea: using civilian hospitals as training grounds for military medical personnel to help prevent the fatality spikes that follow interwar periods.