Tomorrow, I’ll be writing about the use of the newest and greatest blood product: whole blood. Wait, isn’t that what we started out with a hundred years ago? How is it that we are even debating the use of blood component therapy vs whole blood? Most trauma professionals practicing today only remember a time when blood components were infused based on which specific ones were needed.
Prior to about 1900, blood transfusion was a very iffy thing. Transfusions from animals did not go well at all. And even from human to human, it seemed to work well at times but failed massively at others. In 1900, Landsteiner published a paper outlining the role of blood groups (types), which explained the reasons for these successes and failures. With the advent of blood storage solutions that prevented clotting, whole blood transfusion became the standard treatment for hemorrhage in World War I.
When the US entered World War II, the military switched to freeze-dried plasma because of its ease of transport. However, it quickly became clear that plasma-only resuscitation resulted in much poorer outcomes. This led to a return to whole blood resuscitation. By the end of WWII, 2000 units of whole blood were being transfused per day.
In 1965, fractionation of whole blood into individual components was introduced. This allowed for guided therapy for specific conditions unrelated to trauma. It became very popular, even though there were essentially no studies of efficacy or hemostatic potential for patients suffering hemorrhage. The use of whole blood quickly faded away in both civilian and military hospitals.
The use of fresh whole blood returned for logistical reasons in the conflicts in Iraq and Afghanistan. A number of military studies were carried out that suggested improved outcomes when using whole blood in place of blood reconstituted from components. That leads us to where we are today, rediscovering the advantages of whole blood.
And that’s what I’ll review tomorrow!