About This Blog

This blog was created with the intent of spreading informative, and hopefully at least somewhat interesting, information (I realize it is redundant to say "informative information," but I couldn't think of another way to phrase it) about epidemiology and related subjects. Have a look, and hopefully you'll learn something new and interesting!

Wednesday, March 2, 2011

Elephants on LSD, Wilford Brimley, and gas masks. Let's talk about studies.

The title might be a little misleading, but all those are a part of this article in some capacity. Studies are an important part of epidemiology, as you probably have noticed with all the articles based on studies that I have posted so far. Epidemiological studies test whether an exposure has an association with the development of disease (Gordis, 2009). After this, epidemiologists attempt to find whether or not the association is also causal. I'll define/describe some of these words so that makes more sense.

Exposure - think food, sunlight, cigarettes (anything a person can be exposed to)
Disease - this includes actual diseases (lupus, cancer, etc.) and also many other outcomes (buying a car, lifting weights, etc.)*
Association - some kind of relationship exists between exposure and disease, but it's unclear what that relationship is
Causal - the exposure causes the disease

* I know that this is a little confusing but think of a study about whether smoking cigarettes makes a person lift more weights.

It is important to note the difference between association and causation. Just because a study finds an association between eating jelly beans and liver cancer does not mean that eating jelly beans will cause cancer (I just made that up, by the way, so no need to be terrified of jelly beans). It is also important when reading an article to be critical of the findings: are they saying an exposure is associated with a disease, or that the exposure causes the disease? Big difference. News sites sometimes have a habit of leaving important details out or of making assumptions that the actual study may or may not support. Bold headlines are a good way to attract viewers, but they are also a good way to give readers the wrong impression. Take, for example, this article about a breastfeeding study and the various headlines that were produced from it:

http://ksjtracker.mit.edu/2011/01/20/breastfeeding-study-a-case-study-in-bad-headlines/

[Image caption: She's just trying to stay sane. (Corbis, 2010)]
How about this one:
 http://theweek.com/article/index/203843/can-secondhand-smoke-make-you-crazy

Can secondhand smoke make you CRAZY?!?!?!? The picture to the left was attached to the article. I love that caption too, which I kept exactly as it appeared in the article. A little sensationalized, in my opinion. That's not to say there isn't some validity to the study (notice the epidemiologist who led the study making sure to say that the results do not prove causality), which you can investigate yourself if you wish. What should you look for if you want to know more about the validity of a study? Well, I will walk through an article and its associated study and point out some things to keep an eye out for.

Keeping with the smoking trend (there is never a shortage of smoking studies and related articles), I chose this article to look at. First, let's take a look at that headline: Study: Quitting smoking raises diabetes risk. If you just read that and are a smoker, you might think to yourself, "Well, to heck with quitting if it will give me diabetes." If you were a little more critical, you might read on and find that the study authors emphasize that the benefits of quitting far outweigh those of continuing to smoke, and that they attribute the increased risk to the weight gain that often accompanies quitting smoking. Now you might have a different mindset about the topic. Reading the article, though, is just where you should start to investigate the study. Before you make a life-changing decision to cut out jelly beans (or whatever the exposure is in the study), you should also look at the original study.

Here is a link to the study discussed in the article we are looking at. So let's get some of the basics out of the way.

What is the exposure? = Quitting smoking
What is the disease? = Type 2 diabetes mellitus
What is the hypothesis that is being tested? = Smoking cessation increases diabetes risk in the short term, possibly owing to cessation-related weight gain.
What kind of study is it? = Prospective cohort study
Who was studied? = 10,892 middle-aged adults who initially did not have diabetes

How do you find the answers to these questions? Well, luckily, in this study they pretty much just lay it out. Some studies may not, and you might have to do some reading to figure it out from the article. The exposure, disease, and hypothesis should be fairly easy to pick out. The kind of study might be more difficult to find if it isn't listed. In a cohort study "the investigator selects a group of exposed individuals and a group of nonexposed individuals and follows up both groups [note: can be more than two groups] to compare the incidence of disease" (Gordis, 2009, p. 167). In this case the exposed individuals are smokers who quit, and the nonexposed are continuing smokers and individuals who have never smoked. It is prospective because the people studied were identified at the beginning and then assessed later (17 years later in this case) to see if they had developed the disease. This differs from a retrospective cohort study, in which the exposed and nonexposed groups are identified from past records (surveys, employment or medical records, etc.) and the researchers then trace forward through those records to see who went on to develop the disease (Gordis, 2009). So prospective = future, and retrospective = past. Another common type of study is a cross-sectional study, which measures exposure and disease in the groups being compared at a single point in time. So cross-sectional study = present. (Note: There are other types of studies. I just used cross-sectional studies as a comparison to cohort studies.)

Prospective cohort studies are good because they allow researchers to follow groups of individuals that they have selected over a period of time and see if an outcome occurs. This is important because it allows researchers to (possibly) determine whether a temporal relationship exists between an exposure and a disease. A temporal relationship means that the exposure preceded the disease. If a cross-sectional study (remember, one point in time) is done, the researchers cannot determine whether a temporal relationship exists or decide which came first, the exposure or the disease. Temporal relationships are an important part of supporting causation between an exposure and a disease. Prospective studies do have some downsides, though. Participants could drop out (or die) at any time during the follow-up, and there is also a higher chance that confounding factors will creep in.
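If it helps to see the arithmetic, here is a minimal sketch in Python (with completely made-up numbers, not the actual study's data) of the basic cohort comparison: follow an exposed group and a nonexposed group over time and compare how often the disease develops in each.

```python
# Hypothetical cohort counts (NOT the real study data), just to show the idea:
# follow two groups forward in time and compare disease incidence.

exposed = {"n": 1200, "cases": 66}      # e.g., smokers who quit
nonexposed = {"n": 4800, "cases": 192}  # e.g., continuing/never smokers

incidence_exposed = exposed["cases"] / exposed["n"]
incidence_nonexposed = nonexposed["cases"] / nonexposed["n"]

# Relative risk (risk ratio): how many times more likely the exposed group
# was to develop the disease over the follow-up period.
relative_risk = incidence_exposed / incidence_nonexposed

print(f"Incidence (exposed):    {incidence_exposed:.3f}")
print(f"Incidence (nonexposed): {incidence_nonexposed:.3f}")
print(f"Relative risk:          {relative_risk:.2f}")
```

A relative risk above 1 means the disease showed up more often in the exposed group; it says nothing by itself about whether the exposure caused that difference.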

Okay, let's take a second and discuss confounding factors. Confounding factors are other exposures that could have an association with the disease, which makes it difficult to ascertain whether the exposure being studied or the confounding factors led to the outcome. For example, in this study, age, race, sex, education, adiposity (huh? it just means body fat), physical activity, lipid levels, blood pressure, and location were all considered possible confounding factors. The researchers accounted for these particular confounding factors by adjusting their numerical results, so in theory these factors are not affecting the final results. The problem is that not all confounding factors can be accounted for, but a study that adjusts for as much as possible is preferable to one that doesn't account for confounding factors at all. Just keep in mind when reviewing studies that these confounding factors are out there and could be affecting the end result.
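To get a feel for what "adjusting" actually does, here is a toy sketch (again with invented numbers, not the study's) that splits the data into strata of a single confounder, say age group, and pools the stratum-specific comparisons with a simple Mantel-Haenszel risk ratio:

```python
# Toy stratified data (invented). Each stratum is one level of the confounder
# (here, age group). a = exposed cases, n1 = exposed total,
# c = unexposed cases, n0 = unexposed total.
strata = [
    {"a": 10, "n1": 200,  "c": 30,  "n0": 1200},   # younger participants
    {"a": 56, "n1": 1000, "c": 162, "n0": 3600},   # older participants
]

# Crude (unadjusted) risk ratio: lump everyone together and ignore age.
A = sum(s["a"] for s in strata)
N1 = sum(s["n1"] for s in strata)
C = sum(s["c"] for s in strata)
N0 = sum(s["n0"] for s in strata)
crude_rr = (A / N1) / (C / N0)

# Mantel-Haenszel pooled risk ratio: combine the within-stratum comparisons,
# which removes confounding by the stratifying variable (age group).
num = sum(s["a"] * s["n0"] / (s["n1"] + s["n0"]) for s in strata)
den = sum(s["c"] * s["n1"] / (s["n1"] + s["n0"]) for s in strata)
adjusted_rr = num / den

print(f"Crude RR:        {crude_rr:.2f}")
print(f"Age-adjusted RR: {adjusted_rr:.2f}")
```

If the crude and adjusted numbers differ a lot, the confounder was doing some of the work; real studies do the same thing on a bigger scale, usually with regression models that handle many confounders at once.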

Back to the study. What should we really be looking for to assess whether what they say is true or not? The most important thing to look for is a valid statistical association. In other words, do the statistics support the hypothesis, and can we say that the results apply to whole populations (white females, teenagers, Chinese men, etc.)? I know statistics are boring to a lot of people, but they are important. This post is already long, so I won't go into the details of interpreting statistical results, because that could get very involved (there is a small worked sketch after the list below). But it is important to realize that the results could just be due to chance, confounding factors, or bias on the part of the researchers in selecting study individuals or choosing what data to collect. Some questions you might ask yourself when analyzing a study:

  • How did the researchers select the study participants?
  • How did the researchers measure the exposure and the disease?
  • Do the researchers discuss confounding factors and limitations of the study?
  • Can you see an apparent bias on the part of the researchers?
  • Who sponsored the study? (i.e. Are there conflicting interests?)
  • What assumptions do the researchers make?
  • Do the statistics support the researchers' conclusion? (Might need a background in statistics to really interpret this.)
  • Remember, be critical!
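Here is that small worked sketch I mentioned, illustrating the "due to chance" part: a rough 95% confidence interval around a risk ratio (invented numbers once more). If the interval comfortably includes 1.0, chance alone is a pretty plausible explanation for the apparent association.

```python
import math

# Invented 2x2 cohort counts, just for illustration.
a, n1 = 66, 1200    # cases and total among the exposed
c, n0 = 192, 4800   # cases and total among the nonexposed

rr = (a / n1) / (c / n0)

# Approximate standard error of ln(RR) and a 95% confidence interval.
se_log_rr = math.sqrt(1/a - 1/n1 + 1/c - 1/n0)
lower = math.exp(math.log(rr) - 1.96 * se_log_rr)
upper = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"RR = {rr:.2f}, 95% CI: ({lower:.2f}, {upper:.2f})")
# If the interval includes 1.0, chance alone could account for the
# observed association; if it sits well above 1.0, chance is less likely
# (though confounding and bias are still on the table).
```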
So after analyzing the study, I came up with my own observations. I think this was a well-done study. The authors were very candid about the different confounding factors and limitations of the study. They make sure to say that there is an association between quitting smoking and the development of diabetes, but that they cannot prove a causal relationship. I would have liked the title to mention that the weight gain many smokers experience after quitting was a major factor, because it doesn't seem like the act of quitting itself was necessarily at fault in developing diabetes. The same criticism applies to the news article. However, I think the article and study were responsibly reported and didn't try to sway readers in any way. They both did a very good job of explaining the risk of weight gain and diabetes and ways to alleviate it. All in all, I think this was a well-done study that raised awareness of the dangers of weight gain (specifically for those who have recently quit smoking) and its impact on diabetes.

Should you be alarmed if you are a smoker and want to quit? There are a million studies out there trumpeting the benefits of quitting smoking (or, even better, never starting), so I wouldn't take this as a sign not to quit. To really help yourself avoid an increased risk of diabetes after quitting, the best advice would be to eat a healthy diet and watch your weight.

Well, that's my spiel. I'll finish this post with a little entertainment.

First, an article about some silly experiments you might find amusing. Bad science is out there, so remember to be critical of what you hear.

And second, Wilford Brimley! (Note: Not laughing at the disease diabetes, I just find this amusing and it somewhat relates to the study I have been talking about.)


I hope this has been another informative and interesting post. Feel free to leave comments or ask questions. I will do my best, with my admittedly limited knowledge, to provide an adequate answer.

Oh and lastly, here is a list of sources that I used in this post:

Gordis, L. (2009). Epidemiology (4th ed.). Philadelphia: Saunders Elsevier.

Corbis. (2010). She's just trying to stay sane [Photograph]. Retrieved from http://theweek.com/article/index/203843/can-secondhand-smoke-make-you-crazy





2 comments:

  1. I thought this post was interesting. I’m studying epi too and came across an article headline that states: “Marijuana Use Causes Psychosis in Adolescents: BMJ”. Now, this I could understand a little more than just secondhand smoke, but why more in adolescents than, say, everyone that smoked marijuana? Here is the article if you would like to read it yourself: http://www.ibtimes.com/articles/118102/20110302/marijuana-causes-psychosis.htm, and then just today I read that the US has the highest bipolar rates, even higher than some of the poorest countries, where I would think manic depression would sky-rocket. Here is that article: http://www.ecanadanow.com/health/2011/03/10/study-finds-bipolar-rates-highest-in-the-us/. Now here’s what I want to know…for every cause there is an effect, I get that, and epidemiology is a science that has been helpful in so many ways, but mental health disorders are so hard to diagnose. If secondhand smoke can cause a mental disorder, then what is the possibility that that person was just going to break down eventually? When is the study going to be done showing that carbon emissions from a car cause schizophrenia? I know, I know, I’m just going off on a tangent here and I stress again, epidemiology is a much needed science that has been extremely helpful in many of life’s major health issues, but is it possible that some studies cause the general public more worry than help? What is the real purpose of these studies? Are they for the greater good? Or, should people that are not epidemiologists stop writing articles because they don’t know the purpose of the study in the first place?

  2. Good thoughts, and I think a lot of people would agree that some news articles cause more public worry than help. Notice I said news articles. I personally don't think a study on any particular cause and effect (marijuana/schizophrenia, location/bipolar rates, etc.) should go undone because of how the general public will perceive it. The only way to gain knowledge on some of these subjects is to make hypotheses and study those hypotheses. It is up to the scientific community at large to decide whether the study has merit and whether the results are of concern to the health of the public. All these studies, even the ones that seem as misguided as the ones I touched on and that you have found, make some addition to our collective knowledge on these subjects. Do I think that secondhand smoke causes schizophrenia or that marijuana causes psychosis? Not with the research that I've seen, but at least these studies will get other researchers to build on these results (either in agreement or in disagreement) and to get closer to findings that show that cause-and-effect relationship if it exists. This quote from the article, "This piece of research is significant in that it addresses a key question: in adolescents, is the higher-than-average instances of psychosis with marijuana users a cause-and-effect relationship or simply one of correlation?" really shows that this is a preliminary study. There are so many other confounders, especially, it seems, with mental disorders, that it is difficult to establish causality. But the only way to do that is to continue researching. I'm starting to sound like a research lobbyist or something, but I think studies are for the greater good, keeping in mind that everyone should keep an objective eye on the results. As for those who write these articles, it is hard to say who should and shouldn't be writing them. I think the public deserves to know what kind of research is going on in the world and its implications, in an easily understood format. However, the key is to keep that objectivity. While I agree that epidemiologists could better understand the studies done and probably better analyze the material, would epidemiologists be better at keeping objectivity? Hard question to answer. Overall, I think it is the writers' responsibility to portray the correct and objective message the study warrants, and also the reader's responsibility to understand that not everything they read is set in stone.
