I love Crash Course

My next series is going to be a synthesis series. I'm going to review anatomy, physiology, histology, cell and molecular biology, biochemistry, and the oxygen and carbon cycles in the world around us. It sounds like a lot, I know (trust me, I know - I'm the one writing it!), but my goal, by the time I'm done, is that you can follow your breath in and out of your body, see all the amazing things it does along the way, and follow that same breath in and out of the world around you - which means we'll also review what happens to the food you eat. It turns out that we are all connected, you and I, to this planet we walk upon.

Now, I have a fat stack of textbooks beside me as I write this, all of which I will be using to assemble what I learned in my molecular biology degree into an easy-to-follow review of this complex, interwoven process. You don't want the same stack of books I have, and you don't want to do the reading. Let's be honest - I'm not sure I do, either. So after I gathered my stack of books, I turned to my favorite series on the internet to do a brief (ha!) review.

Crash Course was created by vlogbrothers John and Hank Green, and the series I'm linking to for this particular post are all hosted by Hank - he's the scientist in the family. In Crash Course Biology, he reviews the biological systems in the world around us. We may use much of this material for this synthesis (or putting together) of systems, but there's probably stuff I won't get into as much, like the comparative anatomy. When it comes to the anatomy, I'll lean on Crash Course Anatomy and Physiology. His Crash Course Chemistry will be good for understanding some of the biochemistry involved (though he never directly covers biochemistry), and Crash Course Ecology is almost certain to help tie the planet into this review of our systems.

I invite you to pick a series and dive in. While you're waiting for me to get things moving, this will help you have a better understanding, a better visualization, than you might currently have. And for those of you in biology programs, it might just help on those pesky finals...

NB: While Crash Course and SciShow both have Patreon pages to support them, this material is and always will be free to viewers. If you like it enough to support them, go for it (we do), but if you just can't, don't worry - it's still here for you to view! And no, I'm not getting paid for this endorsement - not that I'd object. ;)

SuperChickens?

In this final part of my series on antibiotic resistance, I want to discuss the use of antimicrobials in the food supply. If you need to review other areas of antibiotic resistance, check out “Discussing the Disappearing Miracle” (a lesson in what antibiotic resistance is and is not), “Quitting When You’re Not Really Ahead” (how people accidentally contribute to antibiotic resistance), and “No, the Z-Pack Won’t Treat The Flu” (how overprescription of antibiotics contributes to resistance). In this article, I’ll focus on how antibiotics are used in the growth of animals destined for consumption, what that does in terms of producing resistance, and what we can do in response.

I know the article is called “SuperChickens?”, but I actually want to start by talking turkey. If you live in the United States, you’ve likely seen the president pardoning a turkey on Thanksgiving Day. This is an old tradition - you can see a photograph of Kennedy pardoning a turkey next to a similar photo of Obama doing the same thing 50 years later (1). What’s remarkable in these photos is the difference between the two birds. Kennedy’s turkey is much closer in size to wild turkeys (2), which usually weigh between 4.26 kg (hens) and 7.6 kg (toms), maxing out at 16.85 kg (3). In comparison, the turkeys that grace most tables average 13.5 kg, maxing out at 39 kg (4). Wild turkeys are roughly half the mass of modern, domesticated turkeys (2), and that change didn’t happen by accident.

Until the 1950s, most domesticated turkeys were similar to the wild birds. With the arrival of antibiotics, however, the average size of the bird began to change. While larger birds were initially produced through selective breeding, the demand for meat incentivized breeding not only for size, but also for speed from egg to adult. The demand for a bigger bird, faster, drove competition. When it was discovered in 1948 that antibiotic use increased the growth rate of chicks (chicks given antibiotics grew larger, faster, than those not given antimicrobials), it helped create a new market for the new drugs (5). Since faster growth and bigger birds were the desired outcome, the animals’ feed was soon supplemented with antibiotics.

I know what you’re wondering - who even thought that feeding antibiotics to chickens was a good or necessary idea? It turns out that the introduction of antibiotics was an accident. Researchers were studying other ways to supplement growth, focusing on vitamin B12 (a cobalt-containing vitamin important for red blood cell development, neurological function, and DNA synthesis) (6). The researchers were looking for different sources of B12, and one easily available source was the cellular remains of Streptomyces aureofaciens (5). These bacteria were used to produce the tetracycline antibiotic aureomycin, and the cellular remains were what was left over once the antibiotic had been extracted from the bacteria. Researchers used this material because it was an amazing source of the vitamin at very low cost - it was waste from a process already being done. Another source of B12 was beef liver. Researchers discovered that chicks given the bacterial remains grew 24% faster than chicks given liver. While it wasn’t initially clear that the antibiotic residue in the cellular remains caused the improved growth, the vitamin itself was eventually ruled out as the cause (5).

Suddenly, agriculture had an easy way to improve its product: animals could be grown faster and larger on less feed, and the sooner an animal reached adulthood, the sooner it could be sent to market. Because the first doses of antibiotics were accidental and very low, antibiotics used for growth promotion are also given at very low doses. As a result, the bacterial populations in these animals are exposed to the same drugs used to treat infection in sick animals, but over the entire course of the animals’ lives. This establishes an excellent environment for the bacteria to adapt to the drug and become resistant to it.

See, resistance occurs when bacteria are exposed to a drug but not all of them are killed by it. The weakest bacteria die off, leaving behind those that are not actually susceptible. During clinical dosing of antibiotics (when they are used to treat an infection), high doses are used over a short course. This makes it more difficult for the bacteria to adapt. The high dose is more likely to eliminate more of the bacteria, and the short course - the amount of time actually spent taking the medication - means that any resistant bacteria don’t remain exposed to the drug long-term. That gives our rapidly multiplying bacterial population little opportunity to select for resistance. Instead, as the resistant bugs die off, random chance re-enters the evolutionary picture: there’s nothing present to make the resistant bugs more likely to survive and reproduce, so there’s no benefit to resistance.

But when the exposure is low, more of the bacteria survive, giving a larger population the ability to adapt (versus the much smaller population of resistant bacteria that exists after the rest are killed off). What makes this worse is when that exposure occurs over a long period of time. The benefit of remaining resistant continues, which ensures that the larger population is more likely to retain resistance. Random chance thus plays a smaller role when mutations occur - the pressure to remain resistant persists within the bacterial community, producing more resistant bacteria in greater numbers.
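To make that selection pressure concrete, here’s a deliberately oversimplified toy simulation - nothing in it comes from the cited studies, and every number is invented purely for illustration. It tracks a susceptible and a resistant subpopulation under two dosing patterns: a high-dose, short clinical course and a low-dose, long growth-promotion exposure. The clinical course leaves only a handful of cells for the immune system to mop up, while the long, low-dose exposure leaves a large population that ends up almost entirely resistant.

```python
# Toy model of selection for resistance under antibiotic exposure.
# All numbers are invented for illustration; nothing here comes from the cited studies.

def simulate(kill_susceptible, kill_resistant, days,
             start_total=1_000_000, start_resistant_fraction=0.001,
             capacity=2_000_000):
    """Track a susceptible and a resistant subpopulation under daily drug exposure.

    Each day the drug removes a fraction of each subpopulation, then the
    survivors double (capped at a fixed carrying capacity).
    """
    susceptible = start_total * (1 - start_resistant_fraction)
    resistant = start_total * start_resistant_fraction
    for _ in range(days):
        susceptible *= (1 - kill_susceptible)  # the drug hits susceptible cells hard
        resistant *= (1 - kill_resistant)      # resistant cells mostly shrug it off
        total = susceptible + resistant
        if total > 0:
            regrow = min(2.0, capacity / total)  # survivors regrow toward capacity
            susceptible *= regrow
            resistant *= regrow
    total = susceptible + resistant
    return total, (resistant / total if total else 0.0)

# High dose, short course (clinical use): almost nothing survives, so there is
# very little left for resistance to get a foothold in.
total, frac = simulate(kill_susceptible=0.99, kill_resistant=0.60, days=7)
print(f"clinical course : {total:12,.0f} cells remain ({frac:.0%} resistant)")

# Low dose, long exposure (growth promotion): the population survives at full
# strength and, day after day, becomes ever more enriched for resistance.
total, frac = simulate(kill_susceptible=0.30, kill_resistant=0.01, days=120)
print(f"growth promotion: {total:12,.0f} cells remain ({frac:.0%} resistant)")
```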

Growth promotion isn't the only use for antibiotics in agriculture. When one or more animals are ill, or when stress is high within the population (such as during weaning or transport), farmers use antibiotics to prevent illness in the entire community. This preventative use involves higher doses than those used for growth promotion, but still lower than needed to treat an active infection. In both cases, this subclinical dosing (lower than needed to treat an infection) increases the exposure of bacteria to the same drugs used to treat disease in animals and humans. While preventative use need not continue over an extended period of time, not all farms are as judicious as they could be. This use of antibiotics creates a similar selection pressure on the bacterial populations within the animal - the dose is still high enough to eliminate the most susceptible bacteria, but because no active infection is present, the animals' immune systems never activate to eliminate the remaining, resistant population. Worse, because this dose is higher than the one used for growth promotion, the population that remains is made up mostly of resistant bacteria (as opposed to the mix of resistant and merely less-susceptible bacteria that remains with the very low doses used for growth promotion).

The least controversial use of antibiotics in food production occurs when an animal is actually ill. In these cases, sick animals are given clinical doses of antibiotics, just as the rest of us are. Since the animal relies on someone else to dose it - in feed, in water, or via injection - the risk of a forgotten dose is reduced. Some farmers do this by adding the medication to the water supply, but as long as it is only supplied to the infected animals, the risk of resistance drops. Since it’s illegal (and unprofitable) to sell sick animals, very few object to clinical use in these settings (although some farms ban the use of all antibiotics, or won’t sell animals that required treatment).

OK, so animals can develop resistant bacteria just like people do. You may be asking yourself why that matters. We can’t give animals cold medicine when they get sick - surely we’re not using “people” medicine for animals, right? Wrong. In fact, farmers are more likely to use the inexpensive generic drugs that are less beneficial for human use due to increased resistance. This might not seem like a problem - if we can’t use them anyway, why not get some benefit? Sadly, some of these old medications are held in reserve to treat bacterial infections that are resistant to almost all other antibiotics, precisely because bacteria have rarely been exposed to the older drugs. As a result, using these “last line” drugs in agriculture can create bacteria that are resistant not only to the drugs commonly used to treat infections, but also to these older drugs (7,8).

You may be wondering how, if sick animals can’t be sold, resistant bacteria transfer from healthy animals to people. There are a few ways, but nearly all of them are tied to food safety practices. Possibly the grossest route is via feces: the animal passes stool, and water washes the resistant bacteria into a water source. This contaminated water is then used to irrigate fresh vegetables that are likely not cooked before consumption. In fact, this is exactly how the 2006 Escherichia coli outbreak occurred. A cattle farmer leased land to a spinach farmer, and the spinach fields became contaminated by infected cattle feces carried in the water supply. Because spinach is often consumed raw, the bacteria were able to reproduce without adequate control and were never eliminated by cooking.

That last point - that the food wasn’t cooked - is the key to most of the remaining routes of transfer. Contaminated raw meats can spread bacteria to people as well. Meat that isn’t cooked to a temperature that kills the bacteria can lead to consumption of viable bacteria. This is why menus note that eating undercooked meat or eggs can cause problems in certain groups - undercooked eggs are another possible source of viable bacteria. But even if you’re careful to always cook your food to the right temperature, if you don’t cool it correctly and keep it out of the danger zone (4-60 °C), the bacteria can grow to a dangerous population after cooking. More than that, putting hot food into the refrigerator or freezer can raise the temperature of surrounding foods (including those that are pre-cooked) long enough to allow bacterial growth.

“Oh,” you say, “but I’m always careful to have my meat well done, my eggs over-hard, and to put leftovers away immediately without letting them heat surrounding food.” That’s awesome, but you’re not out of danger yet. There’s still cross-contamination to consider. This can occur if you cut or handle raw meat with the same hands or tools that you then use to handle fresh, uncooked food. This is why cutting boards for designated purposes have increased in popularity - keeping your raw chicken on one board, your red meat on another, your fruit and veg on yet another, helps reduce the risk of putting fruits and veggies into a pool left behind by raw meat. But if you don’t change your knife or wash your hands, you may still have cross contamination issues.

Cross-contamination can even occur before you bring your food home. If your meat and your fresh food aren’t stored correctly, the meat may contaminate the fresh food in your supermarket buggy or in your refrigerator. Meat stored above a crisper drawer may leak into the drawer, especially if it isn’t wrapped well. Food put into the fridge without cleaning the spot where raw meat had been can become contaminated as well. It’s also possible that food handlers were the source of cross-contamination before the food ever reached you.

Once you’ve consumed the contaminated food, the bacteria have the perfect host in which to grow and reproduce. As they grow, they interact with other bacteria in your body (remember the lesson in “Discussing the Disappearing Miracle”). The plasmids that carry the genes for resistance are then shared with bacteria already present in your body, and now those bacteria carry the resistance that arose in the animal population.

Many have suggested that this sort of crossover between agricultural and human bacteria is incredibly unlikely. Sadly, a recent study in China (7,8) illustrates that resistant bacteria in animals are present in food and have caused disease in humans. Worse, the resistance is to a drug of last resort. Colistin is an old antibiotic (developed in 1959), meaning it is now available in a generic formulation and thus cheap. It was also not widely used in humans because of its tendency to cause kidney problems, which limited the opportunity for bacteria common to humans to develop resistance to it. Because it is cheap, colistin has been widely used in agriculture, particularly in Asia, which accounts for 73.1% of colistin production. And because so few bacteria have had the opportunity to develop resistance to it, infections that are resistant to other treatments are treated with colistin (when your choices are maybe develop kidney problems or die of the bacterial infection, medical professionals tend to opt for risking the kidney problems over death).

The study in China found colistin-resistant bacteria in animals, in raw meat in stores, and in 1% of hospital-treated infections. Worse, this resistance has already spread beyond China and is now present in Malaysia. That means patients are already seriously ill with antibiotic-resistant infections that are also resistant to our last line of defense (7). We’re seeing the first waves of a time when effective antibiotics may no longer be available. And while 1% may not seem alarming, remember that resistance spreads fast, because bacteria share genes.

At the start of this article, I promised information on how you, as an ordinary person, can help fight antibiotic resistance arising from agricultural use. I’ve already told you that safe food handling can help prevent the spread of bacteria to you and those you love, but that’s only one way to fight this growing threat. Some governments and producers are already taking the necessary steps - Denmark (9) has outlawed the use of antibiotics for growth promotion in animals destined for market, and two turkey producers (10) have eliminated them either entirely or for subclinical use. You can help make it more profitable for companies to take the longer road to growth by buying from trusted brands or by demanding that your favorite brands eliminate subclinical use. You can demand better living conditions for animals bred for market - I didn’t even discuss how terrible living conditions trigger preventative use of antibiotics or lead to sicker animals. Eggs from free-range hens are far less likely to come from birds given antibiotics, because those hens are less likely to need them. But free-range hens require more land and more time and more food to raise, which increases the cost to the producer and the consumer.

Antibiotic resistance didn’t happen overnight. Many smart people are working on how to solve it, to keep our miracle intact for generations to come. Fixing a problem this big isn’t going to be easy or cheap. But you can help. You can demand that your food be antibiotic-free, you can insist on only taking antibiotics when they’re actually necessary, and you can take every pill on time, to the end, even when you feel better. (By the way, that’s actually a decent test to determine whether your infection is viral or bacterial: viral infections last 7-10 days before the immune system can wipe them out, while bacterial infections treated with antibiotics will improve in a day or two. So if your doctor writes you a script for antibiotics, and you take them, and you aren’t better in a day or two, odds are your infection wasn’t bacterial. I give you permission to remind your doctor about the risks of antibiotic resistance.) You can also educate others, as I’ve done here. Understand the risks, do the hard work to help reduce them, and encourage others to do the same. Together, we might just be able to win.

NB: I included not only the sources I cited here, but also several that I used as I prepared this article. Watch for a video from “In A Nutshell” to explain this very topic, as well. It isn’t cited here, but it’s coming.

Sources:

  1. http://www.businessinsider.com/how-big-turkeys-were-then-and-now-2015-11
  2. http://www.motherjones.com/environment/2014/11/turkey-bigger-thanksgiving-butterball-antibiotics
  3. https://en.wikipedia.org/wiki/Wild_turkey
  4. https://en.wikipedia.org/wiki/Domesticated_turkey
  5. http://amrls.cvm.msu.edu/pharmacology/antimicrobial-usage-in-animals/non-therapuetic-use-of-antimicrobials-in-animals/use-of-antibiotics-in-animals-for-growth-promotion
  6. https://ods.od.nih.gov/factsheets/VitaminB12-HealthProfessional/
  7. http://phenomena.nationalgeographic.com/2015/11/21/mcr-gene-colistin/
  8. http://www.thelancet.com/journals/laninf/article/PIIS1473-3099(15)00424-7/abstract
  9. http://www.cdc.gov/drugresistance/threat-report-2013/
  10. https://consumermediallc.files.wordpress.com/2015/11/turkey_report_final.pdf
  11. http://www.tandfonline.com/doi/full/10.1080/03079450903505771
  12. https://www.avma.org/KB/Resources/FAQs/Pages/Antimicrobial-Use-and-Antimicrobial-Resistance-FAQs.aspx
  13. http://scienceblogs.com/aetiology/2014/05/28/what-is-the-harm-in-agricultural-use-antibiotics/

Blood Type Bonanza

This Thursday brings you a variety of information about blood types. First, there's an article from Wellcome Trust's Mosaic Science on why we actually have blood types. This is followed by a graphic from Wikimedia that illustrates the carbohydrate attachments that are actually responsible for the different ABO blood types. Finally, there's a graphic on the lengths to which the Japanese have taken all of this blood type business (if you don't already know, the fact that the Japanese are involved should give you a hint...).

Why do we have blood types?

More than a century after their discovery, we still don’t really know what blood types are for. Do they really matter? Carl Zimmer investigates.

When my parents informed me that my blood type was A+, I felt a strange sense of pride. If A+ was the top grade in school, then surely A+ was also the most excellent of blood types – a biological mark of distinction.

It didn’t take long for me to recognise just how silly that feeling was and tamp it down. But I didn’t learn much more about what it really meant to have type A+ blood. By the time I was an adult, all I really knew was that if I should end up in a hospital in need of blood, the doctors there would need to make sure they transfused me with a suitable type.

And yet there remained some nagging questions. Why do 40 per cent of Caucasians have type A blood, while only 27 per cent of Asians do? Where do different blood types come from, and what do they do?

To get some answers, I went to the experts – to haematologists, geneticists, evolutionary biologists, virologists and nutrition scientists.

In 1900 the Austrian physician Karl Landsteiner first discovered blood types, winning the Nobel Prize in Physiology or Medicine for his research in 1930. Since then scientists have developed ever more powerful tools for probing the biology of blood types. They’ve found some intriguing clues about them – tracing their deep ancestry, for example, and detecting influences of blood types on our health. And yet I found that in many ways blood types remain strangely mysterious. Scientists have yet to come up with a good explanation for their very existence.

“Isn’t it amazing?” says Ajit Varki, a biologist at the University of California, San Diego. “Almost a hundred years after the Nobel Prize was awarded for this discovery, we still don’t know exactly what they’re for.”

My knowledge that I’m type A comes to me thanks to one of the greatest discoveries in the history of medicine. Because doctors are aware of blood types, they can save lives by transfusing blood into patients. But for most of history, the notion of putting blood from one person into another was a feverish dream.

Renaissance doctors mused about what would happen if they put blood into the veins of their patients. Some thought that it could be a treatment for all manner of ailments, even insanity. Finally, in the 1600s, a few doctors tested out the idea, with disastrous results. A French doctor injected calf’s blood into a madman, who promptly started to sweat and vomit and produce urine the colour of chimney soot. After another transfusion the man died.

Such calamities gave transfusions a bad reputation for 150 years. Even in the 19th century only a few doctors dared try out the procedure. One of them was a British physician named James Blundell. Like other physicians of his day, he watched many of his female patients die from bleeding during childbirth. After the death of one patient in 1817, he found he couldn’t resign himself to the way things were.

“I could not forbear considering, that the patient might very probably have been saved by transfusion,” he later wrote.

Blundell became convinced that the earlier disasters with blood transfusions had come about thanks to one fundamental error: transfusing “the blood of the brute”, as he put it. Doctors shouldn’t transfer blood between species, he concluded, because “the different kinds of blood differ very importantly from each other”.

Human patients should only get human blood, Blundell decided. But no one had ever tried to perform such a transfusion. Blundell set about doing so by designing a system of funnels and syringes and tubes that could channel blood from a donor to an ailing patient. After testing the apparatus out on dogs, Blundell was summoned to the bed of a man who was bleeding to death. “Transfusion alone could give him a chance of life,” he wrote.

Several donors provided Blundell with 14 ounces of blood, which he injected into the man’s arm. After the procedure the patient told Blundell that he felt better – “less fainty” – but two days later he died.

Still, the experience convinced Blundell that blood transfusion would be a huge benefit to mankind, and he continued to pour blood into desperate patients in the following years. All told, he performed ten blood transfusions. Only four patients survived.

While some other doctors experimented with blood transfusion as well, their success rates were also dismal. Various approaches were tried, including attempts in the 1870s to use milk in transfusions (which were, unsurprisingly, fruitless and dangerous).

Blundell was correct in believing that humans should only get human blood. But he didn’t know another crucial fact about blood: that humans should only get blood from certain other humans. It’s likely that Blundell’s ignorance of this simple fact led to the death of some of his patients. What makes those deaths all the more tragic is that the discovery of blood types, a few decades later, was the result of a fairly simple procedure.

The first clues as to why the transfusions of the early 19th century had failed were clumps of blood. When scientists in the late 1800s mixed blood from different people in test tubes, they noticed that sometimes the red blood cells stuck together. But because the blood generally came from sick patients, scientists dismissed the clumping as some sort of pathology not worth investigating. Nobody bothered to see if the blood of healthy people clumped, until Karl Landsteiner wondered what would happen. Immediately, he could see that mixtures of healthy blood sometimes clumped too.

Landsteiner set out to map the clumping pattern, collecting blood from members of his lab, including himself. He separated each sample into red blood cells and plasma, and then he combined plasma from one person with cells from another.

Landsteiner found that the clumping occurred only if he mixed certain people’s blood together. By working through all the combinations, he sorted his subjects into three groups. He gave them the entirely arbitrary names of A, B and C. (Later on C was renamed O, and a few years later other researchers discovered the AB group. By the middle of the 20th century the American researcher Philip Levine had discovered another way to categorise blood, based on whether it had the Rh blood factor. A plus or minus sign at the end of Landsteiner’s letters indicates whether a person has the factor or not.)

When Landsteiner mixed the blood from different people together, he discovered it followed certain rules. If he mixed the plasma from group A with red blood cells from someone else in group A, the plasma and cells remained a liquid. The same rule applied to the plasma and red blood cells from group B. But if Landsteiner mixed plasma from group A with red blood cells from B, the cells clumped (and vice versa).

The blood from people in group O was different. When Landsteiner mixed either A or B red blood cells with O plasma, the cells clumped. But he could add A or B plasma to O red blood cells without any clumping.

It’s this clumping that makes blood transfusions so potentially dangerous. If a doctor accidentally injected type B blood into my arm, my body would become loaded with tiny clots. They would disrupt my circulation and cause me to start bleeding massively, struggle for breath and potentially die. But if I received either type A or type O blood, I would be fine.

Landsteiner didn’t know what precisely distinguished one blood type from another. Later generations of scientists discovered that the red blood cells in each type are decorated with different molecules on their surface. In my type A blood, for example, the cells build these molecules in two stages, like two floors of a house. The first floor is called an H antigen. On top of the first floor the cells build a second, called the A antigen.

People with type B blood, on the other hand, build the second floor of the house in a different shape. And people with type O build a single-storey ranch house: they only build the H antigen and go no further.

Each person’s immune system becomes familiar with his or her own blood type. If people receive a transfusion of the wrong type of blood, however, their immune system responds with a furious attack, as if the blood were an invader. The exception to this rule is type O blood. It only has H antigens, which are present in the other blood types too. To a person with type A or type B, it seems familiar. That familiarity makes people with type O blood universal donors, and their blood especially valuable to blood centres.

Landsteiner reported his experiment in a short, terse paper in 1900. “It might be mentioned that the reported observations may assist in the explanation of various consequences of therapeutic blood transfusions,” he concluded with exquisite understatement. Landsteiner’s discovery opened the way to safe, large-scale blood transfusions, and even today blood banks use his basic method of clumping blood cells as a quick, reliable test for blood types.

But as Landsteiner answered an old question, he raised new ones. What, if anything, were blood types for? Why should red blood cells bother with building their molecular houses? And why do people have different houses?

Solid scientific answers to these questions have been hard to come by. And in the meantime, some unscientific explanations have gained huge popularity. “It’s just been ridiculous,” sighs Connie Westhoff, the Director of Immunohematology, Genomics, and Rare Blood at the New York Blood Center.

In 1996 a naturopath named Peter D’Adamo published a book called Eat Right 4 Your Type. D’Adamo argued that we must eat according to our blood type, in order to harmonise with our evolutionary heritage.

Blood types, he claimed, “appear to have arrived at critical junctures of human development.” According to D’Adamo, type O blood arose in our hunter-gatherer ancestors in Africa, type A at the dawn of agriculture, and type B developed between 10,000 and 15,000 years ago in the Himalayan highlands. Type AB, he argued, is a modern blending of A and B.

From these suppositions D’Adamo then claimed that our blood type determines what food we should eat. With my agriculture-based type A blood, for example, I should be a vegetarian. People with the ancient hunter type O should have a meat-rich diet and avoid grains and dairy. According to the book, foods that aren’t suited to our blood type contain antigens that can cause all sorts of illness. D’Adamo recommended his diet as a way to reduce infections, lose weight, fight cancer and diabetes, and slow the ageing process.

D’Adamo’s book has sold 7 million copies and has been translated into 60 languages. It’s been followed by a string of other blood type diet books; D’Adamo also sells a line of blood-type-tailored diet supplements on his website. As a result, doctors often get asked by their patients if blood type diets actually work.

The best way to answer that question is to run an experiment. In Eat Right 4 Your Type D’Adamo wrote that he was in the eighth year of a decade-long trial of blood type diets on women with cancer. Eighteen years later, however, the data from this trial have not yet been published.

Recently, researchers at the Red Cross in Belgium decided to see if there was any other evidence in the diet’s favour. They hunted through the scientific literature for experiments that measured the benefits of diets based on blood types. Although they examined over 1,000 studies, their efforts were futile. “There is no direct evidence supporting the health effects of the ABO blood type diet,” says Emmy De Buck of the Belgian Red Cross-Flanders.

After De Buck and her colleagues published their review in the American Journal of Clinical Nutrition, D’Adamo responded on his blog. In spite of the lack of published evidence supporting his Blood Type Diet, he claimed that the science behind it is right. “There is good science behind the blood type diets, just like there was good science behind Einstein’s mathmatical [sic] calculations that led to the Theory of Relativity,” he wrote.

Comparisons to Einstein notwithstanding, the scientists who actually do research on blood types categorically reject such a claim. “The promotion of these diets is wrong,” a group of researchers flatly declared in Transfusion Medicine Reviews.

Nevertheless, some people who follow the Blood Type Diet see positive results. According to Ahmed El-Sohemy, a nutritional scientist at the University of Toronto, that’s no reason to think that blood types have anything to do with the diet’s success.

El-Sohemy is an expert in the emerging field of nutrigenomics. He and his colleagues have brought together 1,500 volunteers to study, tracking the foods they eat and their health. They are analysing the DNA of their subjects to see how their genes may influence how food affects them. Two people may respond very differently to the same diet based on their genes.

“Almost every time I give talks about this, someone at the end asks me, ‘Oh, is this like the Blood Type Diet?’” says El-Sohemy. As a scientist, he found Eat Right 4 Your Type lacking. “None of the stuff in the book is backed by science,” he says. But El-Sohemy realised that since he knew the blood types of his 1,500 volunteers, he could see if the Blood Type Diet actually did people any good.

El-Sohemy and his colleagues divided up their subjects by their diets. Some ate the meat-based diets D’Adamo recommended for type O, some ate a mostly vegetarian diet as recommended for type A, and so on. The scientists gave each person in the study a score for how well they adhered to each blood type diet.

The researchers did find, in fact, that some of the diets could do people some good. People who stuck to the type A diet, for example, had lower body mass index scores, smaller waists and lower blood pressure. People on the type O diet had lower triglycerides. The type B diet – rich in dairy products – provided no benefits.

“The catch,” says El-Sohemy, “is that it has nothing to do with people’s blood type.” In other words, if you have type O blood, you can still benefit from a so-called type A diet just as much as someone with type A blood – probably because the benefits of a mostly vegetarian diet can be enjoyed by anyone. Anyone on a type O diet cuts out lots of carbohydrates, with the attending benefits of this being available to virtually everyone. Likewise, a diet rich in dairy products isn’t healthy for anyone – no matter their blood type.

One of the appeals of the Blood Type Diet is its story of the origins of how we got our different blood types. But that story bears little resemblance to the evidence that scientists have gathered about their evolution.

After Landsteiner’s discovery of human blood types in 1900, other scientists wondered if the blood of other animals came in different types too. It turned out that some primate species had blood that mixed nicely with certain human blood types. But for a long time it was hard to know what to make of the findings. The fact that a monkey’s blood doesn’t clump with my type A blood doesn’t necessarily mean that the monkey inherited the same type A gene that I carry from a common ancestor we share. Type A blood might have evolved more than once.

The uncertainty slowly began to dissolve, starting in the 1990s with scientists deciphering the molecular biology of blood types. They found that a single gene, called ABO, is responsible for building the second floor of the blood type house. The A version of the gene differs by a few key mutations from B. People with type O blood have mutations in the ABO gene that prevent them from making the enzyme that builds either the A or B antigen.

Scientists could then begin comparing the ABO gene from humans to other species. Laure Ségurel and her colleagues at the National Center for Scientific Research in Paris have led the most ambitious survey of ABO genes in primates to date. And they’ve found that our blood types are profoundly old. Gibbons and humans both have variants for both A and B blood types, and those variants come from a common ancestor that lived 20 million years ago.

Our blood types might be even older, but it’s hard to know how old. Scientists have yet to analyse the genes of all primates, so they can’t see how widespread our own versions are among other species. But the evidence that scientists have gathered so far already reveals a turbulent history to blood types. In some lineages mutations have shut down one blood type or another. Chimpanzees, our closest living relatives, have only type A and type O blood. Gorillas, on the other hand, have only B. In some cases mutations have altered the ABO gene, turning type A blood into type B. And even in humans, scientists are finding, mutations have repeatedly arisen that prevent the ABO protein from building a second storey on the blood type house. These mutations have turned blood types from A or B to O. “There are hundreds of ways of being type O,” says Westhoff.

Being type A is not a legacy of my proto-farmer ancestors, in other words. It’s a legacy of my monkey-like ancestors. Surely, if my blood type has endured for millions of years, it must be providing me with some obvious biological benefit. Otherwise, why do my blood cells bother building such complicated molecular structures?

Yet scientists have struggled to identify what benefit the ABO gene provides. “There is no good and definite explanation for ABO,” says Antoine Blancher of the University of Toulouse, “although many answers have been given.”

The most striking demonstration of our ignorance about the benefit of blood types came to light in Bombay in 1952. Doctors discovered that a handful of patients had no ABO blood type at all – not A, not B, not AB, not O. If A and B are two-storey buildings, and O is a one-storey ranch house, then these Bombay patients had only an empty lot.

Since its discovery this condition – called the Bombay phenotype – has turned up in other people, although it remains exceedingly rare. And as far as scientists can tell, there’s no harm that comes from it. The only known medical risk it presents comes when it’s time for a blood transfusion. Those with the Bombay phenotype can only accept blood from other people with the same condition. Even blood type O, supposedly the universal blood type, can kill them.

The Bombay phenotype proves that there’s no immediate life-or-death advantage to having ABO blood types. Some scientists think that the explanation for blood types may lie in their variation. That’s because different blood types may protect us from different diseases.

Doctors first began to notice a link between blood types and different diseases in the middle of the 20th century, and the list has continued to grow. “There are still many associations being found between blood groups and infections, cancers and a range of diseases,” Pamela Greenwell of the University of Westminster tells me.

From Greenwell I learn to my displeasure that blood type A puts me at a higher risk of several types of cancer, such as some forms of pancreatic cancer and leukaemia. I’m also more prone to smallpox infections, heart disease and severe malaria. On the other hand, people with other blood types have to face increased risks of other disorders. People with type O, for example, are more likely to get ulcers and ruptured Achilles tendons.

These links between blood types and diseases have a mysterious arbitrariness about them, and scientists have only begun to work out the reasons behind some of them. For example, Kevin Kain of the University of Toronto and his colleagues have been investigating why people with type O are better protected against severe malaria than people with other blood types. His studies indicate that immune cells have an easier job of recognising infected blood cells if they’re type O rather than other blood types.

More puzzling are the links between blood types and diseases that have nothing to do with the blood. Take norovirus. This nasty pathogen is the bane of cruise ships, as it can rage through hundreds of passengers, causing violent vomiting and diarrhoea. It does so by invading cells lining the intestines, leaving blood cells untouched. Nevertheless, people’s blood type influences the risk that they will be infected by a particular strain of norovirus.

The solution to this particular mystery can be found in the fact that blood cells are not the only cells to produce blood type antigens. They are also produced by cells in blood vessel walls, the airway, skin and hair. Many people even secrete blood type antigens in their saliva. Noroviruses make us sick by grabbing onto the blood type antigens produced by cells in the gut.

Yet a norovirus can only grab firmly onto a cell if its proteins fit snugly onto the cell’s blood type antigen. So it’s possible that each strain of norovirus has proteins that are adapted to attach tightly to certain blood type antigens, but not others. That would explain why our blood type can influence which norovirus strains can make us sick.

It may also be a clue as to why a variety of blood types have endured for millions of years. Our primate ancestors were locked in a never-ending cage match with countless pathogens, including viruses, bacteria and other enemies. Some of those pathogens may have adapted to exploit different kinds of blood type antigens. The pathogens that were best suited to the most common blood type would have fared best, because they had the most hosts to infect. But, gradually, they may have destroyed that advantage by killing off their hosts. Meanwhile, primates with rarer blood types would have thrived, thanks to their protection against some of their enemies.

As I contemplate this possibility, my type A blood remains as puzzling to me as when I was a boy. But it’s a deeper state of puzzlement that brings me some pleasure. I realise that the reason for my blood type may, ultimately, have nothing to do with blood at all.

This article was originally published on Mosaic. Read the original article.


All red blood cells carry a specific chain of carbohydrates, and the presence or absence of particular sugars at the distal terminal of that chain determines the blood type. There are 4 phenotypes among the ABO blood types, but those are determined by 6 genotypes. In other words, your mother gave you one allele, your father gave you another, and the two combine to form your genotype, which is expressed as a specific physical trait known as a phenotype. (If you've read the article, most of this should be review.)

So, you can be AA, AO, BB, BO, AB, or OO: these are the 6 genotypes. AA and AO are both expressed as type A blood, just as BB and BO are both expressed as type B blood, which is how 6 genotypes distill into 4 phenotypes.

But how does this tie back to the carbohydrates? Well, all cells have tags on them that identify them as you. These tags are proteins embedded in the lipid membranes of your cells with carbohydrates attached at the end. In the case of blood types, as I mentioned, all four blood types start with the same chain, varying only at the distal terminal. Add a galactose sugar onto that chain, and you have a B-type tag. Take that same galactose, swap one of its hydroxyl groups for an amine, and attach an acetyl group to that amine, and you get N-acetylgalactosamine - now that's an A-type tag. Omit either sugar - the galactose or its souped-up cousin - and you have the O-type tag. Now, because you carry two alleles, your cells display the tags both encode: you can have A-type tags, B-type tags, or O-type tags. If you have all O-type tags, you have type O blood. If you have any A-type tags but NO B-type tags, you've got type A, whether that's all type A (AA) or only half type A (AO). If you've got B but not A, you've got type B blood (BB or BO). It's when your cells carry both the type A tag (N-acetylgalactosamine) AND the type B tag (galactose) that you get type AB blood. That's what this image is showing. I wanted to be sure to illustrate the biochemistry of this clearly.
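If it helps to see that genotype-to-phenotype collapse spelled out, here's a tiny sketch in code. The function name is mine, not standard notation; the rules are just the ones described above, with A and B codominant and O recessive.

```python
# A minimal sketch of the genotype -> phenotype collapse described above.
# The function name is mine; the rules are simply A and B codominant, O recessive.

def abo_phenotype(allele_from_mom, allele_from_dad):
    """Return the ABO blood type (phenotype) for a pair of inherited alleles."""
    alleles = {allele_from_mom.upper(), allele_from_dad.upper()}
    if alleles == {"A", "B"}:
        return "AB"  # both tags present: N-acetylgalactosamine and galactose
    if "A" in alleles:
        return "A"   # AA or AO: at least one A-type tag, no B-type tag
    if "B" in alleles:
        return "B"   # BB or BO: at least one B-type tag, no A-type tag
    return "O"       # OO: only the unmodified H antigen

# The six genotypes collapse into the four phenotypes:
for genotype in ("AA", "AO", "BB", "BO", "AB", "OO"):
    print(genotype, "->", abo_phenotype(genotype[0], genotype[1]))
```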

And on to the Japanese hijinks...

Japanese Blood Typing, courtesy of Business Insider

Water Wednesdays

The surface of the planet we live on is approximately 71% liquid water, and the average adult is about 65% water. Our understanding of life centers on liquid water: without water, we assume there is no life. It is the most common solvent in labs, and when it is contaminated, it causes so many deaths that it can set nations back by centuries. I am passionate about water safety and ending the world water crisis, so every Wednesday will be Water Wednesday. Look for articles, infographics, or links to water-related issues. They may be like today's infographic on water in labs; they may be a chemistry lesson on water, or articles about specific water-borne illnesses or about pathogens that are, for some portion of their lifespan, dependent upon water to mature, infect, or breed. They may be about water purity or water safety; they may be cautionary, informational, or even, occasionally, just fun. But Wednesdays will be dedicated to the liquid that brings us life.

The Law Of Unintended Consequences

So yesterday’s post introduced the law of unintended consequences. Hopefully, this wasn’t the first time you’d heard of the idea, but just in case it was, let’s define what is meant: the law of unintended consequences says that every action has consequences beyond the ones intended. Imagine ripples in a pond when a rock is dropped in: the rock creates the first splash, but that splash then ripples outward, and the ripples can have impacts of their own, separate from the initial impact of the rock hitting the water. It’s worth noting that unintended consequences are not necessarily unwanted consequences; sometimes, the ripple effect turns out to be exactly what you want. However, if your experimental design goes wrong, you can almost guarantee that the cause was one of these ripple effects gone wrong.

Let’s start with a simple example: you’re sitting in a boat on a lake. You’ve got a full glass of an icy cold beverage. You pluck out one of the ice cubes and decide to toss it into the lake. It makes a nice splash, and you see it ripple outwards. You decide to try this again, but with a bigger chunk of ice. You set your very full glass down and reach into the cooler, fishing out an enormous chunk of unbroken ice. You toss this overboard with a decent amount of force. You get the same satisfying splash, but this time, the ripples are substantial enough that they rock your boat. This is enough to spill your icy cold beverage. As you sit back down, your pants land in the spilled beverage. Jumping up in alarm, you rock the boat further, and since this is actually just a little boat, it’s enough to overturn it. You’ve now fallen into the water, along with your cooler, your beverage, and everything else you had on the boat. Fortunately, you were in relatively shallow water, so you’re able to get to your feet and keep your head above water. You can right your boat and begin to collect everything you spilled, putting it all back into the boat, but you really don’t know if you can climb back in yourself without overturning the boat again.

Tossing the ice and seeing the ripples are the intended consequences. Everything else is an unintended consequence: the spilled drink, the wet pants, the upended boat, the lost contents. In an experiment, this long list of knock-on effects is an indication of the things that can go wrong unless you think things through in advance.

So, what might this look like in your lab? Let’s imagine you’re looking for the minimum bactericidal concentration (MBC) of an antibiotic against a strain of E. coli that you suspect may have developed resistance to multiple antibiotics. You want to be certain that the drug of choice will be effective against the bacteria, so doing the MBC test correctly is important.

I haven’t discussed the how-to of an MBC test before; I’ll write an entry on that later. For the moment, we’ll assume you know how: you culture your sample to ensure you have enough to do your test, streak an isolation plate to make sure you test only the bacteria of interest, reculture those bacteria, do a count under the microscope to determine the concentration, then inoculate a series of broths containing increasing concentrations of the antibiotic of interest. Incubate for 24 hours, take the tubes without growth, and use them to inoculate plates without any antibiotics. The plate that shows no growth from the broth with the lowest dose of antibiotic determines the minimum bactericidal concentration. I go over these details (there are more) to give you a quick overview of the places where your decisions can have unintended consequences.
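If it helps to see how that final read-out works, here’s a minimal sketch in code. The concentrations and growth results are invented for illustration, and the function name is mine, not part of any standard protocol.

```python
# A minimal sketch of the final MBC read-out described above.
# The concentrations and growth results below are invented for illustration.

# antibiotic concentration (µg/mL) -> did the antibiotic-free subculture plate show growth?
plate_growth = {
    0.5: True,
    1.0: True,
    2.0: False,
    4.0: False,
    8.0: False,
}

def minimum_bactericidal_concentration(plate_growth):
    """Return the lowest tested concentration whose subculture plate shows no growth."""
    clear_plates = [conc for conc, grew in sorted(plate_growth.items()) if not grew]
    return clear_plates[0] if clear_plates else None

print("MBC:", minimum_bactericidal_concentration(plate_growth), "µg/mL")  # -> MBC: 2.0 µg/mL
```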

What if you decided to use tap water when you made up your culture broth? It’s possible that autoclaving the broth before it’s inoculated would negate any unintended consequences of using water that wasn’t sterile, but autoclaving can’t remove dissolved minerals, and some bacteria thrive in saline environments while others are inhibited by too much salinity. Using deionized water would control for the variation in salinity that even autoclaving could not remove, which helps limit any impact on your bacteria’s growth, whether enhancement or inhibition.

But you used tap water, so the water has a little more chlorine in it than DI water would. You don’t think this will be a big deal: gram-negative bacteria are classically grown on MacConkey agar, which uses bile salts and NaCl to inhibit gram-positive growth, so a slightly harsher medium shouldn’t be an issue. That is, unless the chlorine content is above what even these bacteria can tolerate, reducing growth during the initial culture, before you ever attempt the MBC. That means that before you’ve started testing the antibiotic, you’ve created a hostile environment that kills your bacteria - and invalidates your results.

You thought of that, which is why you used DI water instead, and made sure to sterilize everything in the autoclave before you started your test. In fact, everything went perfectly. You got the results you needed: this specific strain of E. coli is resistant to cephalosporins, but not to the quinolones. You pass this information on. What you don’t know is that the patient can’t safely take quinolones - the best drug to treat this patient’s infection isn’t safe for the patient. The unintended consequence here is that the doctor must decide to either treat with a less effective drug, or risk treatment with a drug the patient won’t react well to.

This is the challenge of rising antibiotic resistance, and the challenge faced by the researchers in yesterday’s study. Even when science is done right, what works in the lab may not work in the clinic. This is the law of unintended consequences. The best way to try to prevent unintended consequences is to always do more research. There’s no guarantee that research will find everything you need, but skipping the book work (or journal work, or internet work) will leave you hurting more often than not.

I mentioned that sometimes the unintended consequences can be beneficial: aspirin is a classic example. When acetylsalicylic acid was first derived as an alternative to the salicylic acid from white willow bark (which caused digestive issues when used - another example of unintended consequences), it was used to treat pain. It was later found to act as an anticoagulant as well, and it has since gained widespread acceptance in the treatment and prevention of heart attacks and strokes. This was a positive unintended consequence.

This isn’t a new idea: in 1936, Robert K. Merton listed possible causes of unintended consequences. See if you can identify which causes may be at play in our above scenarios.

  1. Ignorance, or the inability to anticipate every possible outcome.
  2. Errors of analysis, or errors from following habits that worked in past situations but do not necessarily apply to the current one.
  3. Immediate interests overriding long-term interests.
  4. Basic values that may prohibit certain actions over others (even if the resulting long-term consequences could be unfavorable).
  5. Self-defeating prophecies, or the drive to solve problems before they occur (possibly preventing such problems).

By being aware of the possible causes, you may be able to prevent such mistakes yourself in the future. Research helps prevent ignorance-related errors, along with errors of analysis. Being certain you understand the risks, the benefits, and your own moral and ethical compass will help limit the consequences of immediate interests or basic value conflicts. Finally, remember that all scientists are human, and learn from your mistakes. Like the overturned boat, you gather yourself, pick up the pieces, and move forward.

Fruit Flies In Space...

Today's a twofer! Today's LinkedIn video reminded me of the paper I presented for my pathology class (which was run more like a journal club). Both deal with the study of fruit flies (Drosophila sp.) in space. The video focuses on the role of the flies in studies aboard the ISS, while the paper focuses on a specific study done in 2006 and its probable importance. Specifically, the paper examined how changes to gravity during the development of Drosophila sp. altered immune pathways, leading to specific weaknesses in flies raised in a microgravity environment. This finding in a model species (one that can be used to illustrate how humans work, but on a simpler scale) may help explain why many astronauts are more prone to illness upon return from space, and it may also provide clues for how to counter the impact of microgravity on immunity in the future.


Common drug linked to longevity in mice

Metformin, used worldwide to treat Type 2 diabetes (in which the body does not respond appropriately to insulin in the bloodstream, leading to excessive amounts of insulin in the blood), may be one of the most prescribed drugs for that condition. It doesn't just address the insulin problem; unlike most drugs, it also helps prevent many of the cardiovascular problems associated with the hormonal imbalances of diabetes. It is also commonly used to treat polycystic ovarian syndrome and metabolic syndrome, two disorders still not completely understood. A new study released in Nature Communications this month revealed that, when fed in very low doses to middle-aged mice, metformin actually did more. The drug seemed to mimic the effects of a low-calorie diet; namely, it increased the lifespan of the mice in the experimental group. However, in higher doses, metformin did enough kidney damage to significantly shorten lifespans - in fact, it shortened lives by more than the lower dose lengthened them.

It's still an interesting result, and it's worth remembering that studying animal models allows us to discover which paths are worth pursuing in humans - with all the risks that entails - in a safer, faster, more humane manner.

Bias shows up in really strange places...

In the history of the Nobel Prize, a total of 4 people have won the prize more than once. Linus Pauling is one of those four. He won the award in Chemistry in 1954, and then again for Peace in 1962. This makes him one of only two people to have won the award in two unrelated fields (Marie Curie being the other). He's also the only one of the four not to have shared his prizes - they were not shared with other scientists (as with the discovery of the structure of DNA, credited to Watson & Crick) or with humanitarian organizations. Pauling's work is credited with helping to found both quantum chemistry (think about our understanding of the atom, and how that shapes our comprehension of how chemistry functions - for instance, that bonds are about the sharing or exchange of electrons) and molecular biology, the application of chemistry to the biological sciences and the exploration of biology at the molecular level - including his own search for the structure of DNA. In fact, it was Pauling who discovered the secondary structures of proteins: alpha helices and beta pleated sheets.

Pauling's genius in chemistry and biochemistry at the beginning of his career makes his later approach to dietary supplements, starting with vitamin C, even more surprising. In this article from The Atlantic, Dr. Paul Offit discusses Pauling's views on vitamins, minerals, and other supplements. The problem, it turns out, isn't that there aren't any studies examining the impact of these supplements. No - given Pauling's faith in them, multiple studies were done - it's that the support for the claims simply didn't exist. No matter how much Linus Pauling wanted extra-dietary vitamins to be panaceas, the evidence from studies repeatedly demonstrated that not only were these supplements unhelpful, some can actually be harmful.

The distinction between extra-dietary supplements and the nutrients consumed in food turns out to be important. The problems seen with supplements taken as pills, tablets, or other dosage forms can be avoided when the same nutrients come from dietary sources. In other words, instead of taking an iron supplement, it is safer to eat red meat, green vegetables, and nuts.

All of this just demonstrates how important it is to rely on repeatable, measurable evidence rather than our gut instincts when exploring science. Even geniuses can be led astray.

Talk about the blues...

One of the greatest mysteries in life is death. In an attempt to understand what occurs in the last moments of life, scientists study the lives and deaths of small animals, looking for clues that might apply to larger organisms. One such study, in London, examined worms. This article, from The Conversation and Ars Technica, discusses the finding: a wave of blue light signaled the death of the worm. In some cases, delaying the blue light could also delay death, though it didn't appear to work in all cases. More details can be found at the link.

Watch for more on this to come...

My husband shared this TEDMED talk with me tonight, and I wanted to share it with you, along with Dr. Attia's blog. It takes courage for doctors to deviate from traditional wisdom or accepted dogma and consider new approaches to existing problems. In my own personal experience, the evidence he seems to be describing rings true: eating more whole grains and fewer refined carbohydrates has served me far better than any low-fat approach I've attempted (low-fat always seems to result in higher cholesterol for me, meaning my body compensates for the lower dietary intake by raising its own production - an internal imbalance that diet alone can't correct). I'll be following Dr. Attia, and I hope to have more information for you in the months and years to come.

A new approach to drug development...

The NIH recently announced a pilot program to help speed up drug development. Most drug development focuses on finding new molecules or compounds to treat existing disorders. The new program instead asks researchers to find new uses for existing compounds, hopefully reducing both the cost and time involved in bringing new treatments to patients.

Using Viruses to Fight Bacteria?

In Tbilisi, Georgia (the country, not the state), Dr. Revaz Adamia is trying something different in the war against bacteria: instead of using antimicrobial drugs, he's treating infections with a special class of viruses. Why viruses? The class in use, bacteriophages, targets only bacteria, not the infected human. As a result, the viruses infect and kill the bacteria that were making the patient sick. When the bacterial infection is gone, the viruses, now without a host, die off.

This approach offers an alternative in the face of the growing problem of antimicrobial resistance. Many bacteria are increasingly resistant to the drugs used to treat the patients they infect. The most well-known case is MRSA, or methicillin-resistant Staphylococcus aureus, a bacterium that frequently causes skin and respiratory infections and food poisoning. Resistant bacteria no longer respond to the drugs once used to treat them, making treatment of infected patients increasingly difficult.

Scots Scientists To Trial Synthetic Human Blood

Yesterday, I drove to my local donor center, went through the short screening that determines if I’m healthy enough to donate, and then gave blood. My blood type, O-, is known as the “universal donor”, meaning it can be safely donated to anyone in an emergency, without having to check the recipient’s blood type first. (I, on the other hand, can only safely receive O- blood, so while my blood can be safely donated to everyone, my body is very picky about what it will accept.)

Because I’m a universal donor, when the waiting period between donations is up (donors must wait 56 days between whole blood donations, giving red blood cells time to replenish), I often get a phone call or email encouraging me to come in. I rarely need it - when I leave, I note when I’ll be eligible again, set a reminder in my calendar, and then work the next donation into my schedule. But I still get the reminders - less than half of the population can safely donate, and not enough of us do.

That lack of volunteer donors creates a problem: the demand for blood often exceeds the supply. Science has been seeking an answer to this problem for decades, including producing synthetic blood products. However, nothing has replaced human blood... until now.

Actually, that’s not even really fair to say. Scientists in Scotland have gotten permission to pursue human trials of a synthetic blood product, but this product still has a human source. It is grown from human stem cells: researchers take immature, not-yet-differentiated blood cells, clone them, and then mass-produce blood from this stem cell line. Nor are these the controversial embryonic stem cells - they come from adult donors.

So this synthetic blood has an entirely human source, but is then mass produced outside of a human body. Before it can be widely accepted in hospitals around the world, it must go through rigorous testing, and this is the step being carried out now in Scotland. The first human trials have been approved. This means that healthy men and women will be given the new blood product and monitored. As long as there are no adverse reactions, testing will continue.

I’m certain there will always be a need for donors like me. But the fact that we may have a viable alternative means that maybe, someday, people need never die from a lack of safe blood again.

New Human Body Part

Science is forever growing as we continue to search and explore. In an article published in Ophthalmology, a new body part was described that will require textbooks to be rewritten and that is already changing our understanding of certain disorders. Thanks to electron microscopy and donated corneas, a new layer was discovered in the human cornea, the clear front surface of the eye. Named for its discoverer, Dua's layer is the final layer of the cornea and was found after the layers of the cornea were separated by puffs of air and scanned individually.

Electron microscopy fires electrons across the surface of (or through) a sample and can resolve features as small as 50 pm. Most light microscopes, by contrast, are limited to features no smaller than about 200 nm, or 200,000 pm.
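As a quick back-of-envelope comparison using the figures quoted above (treating both numbers as rough resolution limits rather than exact values): 200 nm = 200 × 1,000 pm = 200,000 pm, and 200,000 pm ÷ 50 pm = 4,000. In other words, an electron microscope can resolve details on the order of a few thousand times finer than a typical light microscope.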

Another article on today's Supreme Court ruling...

This time from The Scientist. Seriously, this is only good news. Thanks to Paula A. for the link to this article! (I may interview her for a future article - she is part of my inspiration to become an immunologist, because she's the first one I've ever known!).

US Supreme Court Says Human Genes Can't Be Patented

In a demonstration of just how deeply entrenched science and medicine are in our everyday lives, an article in the Wall Street Journal today announced an important decision from the US Supreme Court: human genes cannot be patented. The issue has been hotly contested. Those arguing for patents contend that the research and development done on these genes is costly, and that without patent protection it is likely to go unfunded. Those arguing against patents point out the absurdity of patenting a gene carried by millions of people (or even just a few), and worse, the trouble caused when a carrier of a gene seeks treatment for their condition, only to find their own genetic code locked under patent protection.

I, personally, am an advocate of openness and freedom. I believe that keeping medical research like this locked under patent is absurd and often hinders advances in treatment. I will note, however, that I am not currently employed by any researchers, so I am not bound by any such confidentiality agreements myself. I can understand if a scientist's work and livelihood depend on funding, and thus on signing those agreements. I may find them absurd, but at the end of the day, pragmatism still has its place.

Still, I think this was a victory for the open exchange of ideas. What do you think? Will this be a boon to medicine? Should it have ever been in question?


A new rapid diagnostic test...

There are illnesses we take in stride (the common cold), and then there are ones that scare us at a visceral level (Ebola). Sometimes that fear is justified - some bacterial pathogens are both amazingly virulent and stunningly hardy (tetanus). Other pathogens are commonly misidentified at first, but frighteningly lethal in a very short time (Ebola again). Sometimes, though, the public panic over a disease is a holdover from an older time. Leprosy, or Hansen's disease, is a mix of both. Misunderstood by most people, and nowhere near as contagious as feared, leprosy is certainly not the threat it is often portrayed to be. But as another disorder that is often misdiagnosed, with terrible consequences, it remains devastating for the 200,000 people diagnosed each year and for those already living with it. In this article from the BBC, a new rapid diagnostic test is discussed. It could allow rapid and accurate detection of infection with Mycobacterium leprae, the bacterium that causes Hansen's disease, leading to earlier treatment with the cocktail of antimicrobial drugs needed to treat the disease. The sooner patients start the appropriate antimicrobials, the better their prognosis: they are less likely to suffer the nerve damage that leads to tissue necrosis and disfigurement.

Gladstone scientists map process by which brain cells form long-term memories

Great discoveries are not made all at once, but rather in little steps, one at a time. One such little step seems to have uncovered the process by which learning moves from temporary into long-term storage, establishing new pathways in the brain well into adulthood.