Evidence-Based Practice in Medicine Part 4: Trusting the Experts

While researching the issues surrounding this topic, I came across an excellent article from Scientific American that is well worth reading in its entirety. The article, published in 2018, uses measles as an example, but the thoughts expressed are extremely relevant in today’s COVID-19- and science-denying political climate.

From “(Dis)trust in Science”:

“While we can all agree that we do not want people to get sick, what is the underlying basis for the idea that the opinions of experts—including scientists—deserve more trust than the average person in evaluating the truth of reality?”

Read more:

https://blogs.scientificamerican.com/observations/dis-trust-in-science/

Evidence-Based Practice in Medicine Part 3: Evaluate the Evidence

In our last installment we looked at the validity of medical claims based on the source of the claim: whether there is cited research, whether the research was published in a peer-reviewed journal, and whether the authors had any conflicts of interest.  Let’s assume you are researching the effectiveness of drinking beet juice to improve your running. A friend shares a link to an article on Facebook.  The article quotes and references some research studies, one of which was published in the peer-reviewed journal Applied Physiology, Nutrition, and Metabolism. You search for and find the original research article on pubmed.gov.  So far you are doing well: the article’s author referenced a research study to back up their claims, the research was published in a peer-reviewed journal, and you can read the abstract online. Now, how do you know if this study is any good? You need to determine what type of study was done and how much evidence that type of study imparts to the research question posed.  

Levels of evidence (sometimes called the hierarchy of evidence) are assigned to studies based on the methodological quality of their design, their validity, and their applicability to patient care. Here are the basic types, from strongest to weakest:

  • Level I – Systematic reviews or meta-analyses of randomized controlled trials
  • Level II – Well-designed randomized controlled trials (RCTs)
  • Level III – Controlled trials without randomization
  • Level IV – Case-control and cohort studies
  • Level V – Systematic reviews of descriptive and qualitative studies
  • Level VI – Single descriptive or qualitative studies
  • Level VII – Expert opinion

You should always search for studies with the highest level of evidence, i.e. meta-analyses or systematic reviews that analyze many randomized, controlled trials (RCTs) to look for consistent results across large numbers of subjects and conditions.   A randomized, controlled trial is, according to Wikipedia, “a type of scientific (often medical) experiment that aims to reduce certain sources of bias when testing the effectiveness of new treatments; this is accomplished by randomly allocating subjects to two or more groups, treating them differently, and then comparing them with respect to a measured response. One group—the experimental group—receives the intervention being assessed, while the other—usually called the control group—receives an alternative treatment, such as a placebo or no intervention.”

Level 1 evidence would be the “gold standard” upon which we can make some good medical conclusions regarding our research question. Level 2 evidence is also very persuasive in our decisions to implement new medical treatments based on research findings.  Level 3 and below may provide useful information on trends and indications for future research.  Our beet juice study appears to be level 3 – placebo-controlled, but not randomized, because there were only 14 subjects, each of whom performed both with and without beet juice (i.e. a crossover study).  Therefore, you may not want to rush out and buy a huge jug of beet juice to chug before your runs just yet – but the research is intriguing. 

The key phrase you want to look for when reading about scientific research in the mainstream media or on social media is “meta-analysis of randomized controlled trials.”  This means that a number of RCTs have been pooled together and analyzed to provide Level 1 evidence.  Lower levels of evidence may be good starting points for new directions in research. For example, if a coach writes an article about how she has noticed her runners performing better with beet juice supplements, that may qualify as an expert’s opinion (Level VII). It could be an accurate description of her experiences but may also be biased by the coach’s desire to sell beet root supplements. She may want to help athletes avoid the temptation of illegal performance-enhancing drugs, or she may just want to raise her profile in the running community. An RCT on the use of beet juice designed to avoid these biases would help us see if beets really do have a physiological effect on running.

No research study methodology is perfect. That’s why research is published, so it can be carefully reviewed by experts and the general public for any flaws, mistakes, or biases that could impact the author’s conclusions. When flaws are identified, new research studies aim to fix those problems and glean further information on the question. Thus, the scientific method is an iterative process where we gradually gather more and more information on a subject to clarify our understanding and produce better and better medical treatments. Yes, sometimes we get it wrong. It’s always important to challenge established conventions when new evidence comes to light. But it’s equally important to understand that new and unusual claims demand a high degree of scrutiny. Learn as much as you can about science and its methods, and you will be able to make better, more informed decisions on your own.  

If you don’t have the time or interest to become adept at evaluating research, you may want to search out expert opinions you can trust. How do you know if an “expert” is trustworthy?  How can you avoid getting scammed by disreputable fakes? That will be the topic of the next installment in this series.  

If you’d like to experience the difference evidence-based, hour-long physical therapy sessions can make in resolving your pain or healing from injury, call OrthoSport Hawaii at 808.373.3555 for more information on scheduling a free online or in-person consult.

Evidence-Based Practice in Medicine Part 2: Know the Source

Recently I’ve been receiving emails from well-meaning friends with links to articles touting this or that “cure” for COVID-19.  Unfortunately, these articles are often opinion pieces by bloggers or newsletter writers who want to chime in on rumors or anecdotes they’ve learned about online. When reading these types of articles, or evaluating claims that “someone” posted on social media and a friend of a friend forwarded to you, how can you determine if the information presented is legitimate? We’ve all heard about bots and trolls spreading fake news throughout the internet.  What tools do we have to evaluate claims, especially ones that contradict what we may be hearing in the mainstream media? 

In our previous blog post on Evidence-Based Practice in Medicine, we looked at the scientific method and how it helps us to weed out ineffective medical treatments so that we can focus on what works.  We apply the scientific method via research studies conducted at universities, non-profit foundations, or private companies. The authors of research studies can present their findings to the public in a variety of ways. Usually, a well-run study will be submitted to a peer-reviewed research journal, and if the methodologies, data, and conclusions are found to be sound, it may be published. Scholars in the associated scientific field are the experts who must review the research manuscript and decide to publish or send the authors back to the drawing board; hence the term “peer-reviewed.”  

An example of a highly respected peer-reviewed journal in the medical field is The Journal of the American Medical Association (JAMA), which is published 48 times a year by the American Medical Association. Its first issue was published in 1883. Another example is the journal Science, which has been published since 1880. There are many more in a variety of specialty biomedical fields; for example, Physical Therapy Journal was first printed in 1921. Research can also be published in industry journals, consumer magazines, corporate communications, government publications, and so forth.  

So, when evaluating “research,” we first need to know where the information was published. Was it a personal story told on someone’s blog or social media? Was it a report printed by a private company? Was it published in a popular magazine or corporate newsletter? We can figure this out by looking at the reference cited, which indicates where to find the original research paper.  Scientific references follow a specific format that most of us learned in school. PubMed is the most commonly used search engine, as it contains over 30 million articles in the biomedical fields. References are often linked so you can click directly to the study and examine it yourself. 

Bloggers, talk radio hosts, social media mavens, and others will often cite “research” but don’t actually provide a reference you can find on PubMed, Google, or any other search engine.  This is a red flag, as it means there is no way to authenticate the claims made. For example, someone could write that “research shows that beet juice improves cardiovascular function,” but without a reference that you can look up, how would you know if that’s a factual statement? You can do your own investigation on PubMed or Google Scholar, of course. Maybe there is a study on that topic, maybe not. But why wouldn’t the author give you the reference or a link to the study they’re talking about? There could be many reasons, but it does make the claim sound suspicious, doesn’t it? 

Now, let’s say that an author does provide a reference to a research study. You find it on PubMed.  How do you know if it’s a good study? How do you know if there are problems with the study? How do you know if the researchers had a conflict of interest, such as trying to prove a newly manufactured drug is safe in order to beat a competitor to market? Finding the research in a peer-reviewed journal may help you avoid faulty and fraudulent research studies, but that doesn’t necessarily mean that the author’s claims are valid.  You may be surprised to learn that some authors will cite a reference that contradicts their claims! They’re counting on the fact that most readers won’t look up the original research or be able to evaluate its findings.  Reader beware! 

But for now, let’s review.  When someone writes “a recent study says…” or “researchers agree…” or similar vague phrases we need to think carefully about what they’re saying and see if we can validate their claims. If no evidence can be found, we may need to dismiss their claims.  So, when evaluating what you read online, start by asking three basic questions:  

  1. Is there published, supporting evidence?
  2. Was it published in a respected, peer-reviewed journal?
  3. Does the study appear free of conflicts of interest?

If you can answer yes to all three questions, then your next step would be to look at the type of research study published — be it a case study, randomized controlled trial, retrospective survey, etc.  Evaluating the evidence presented in different types of research will be the next topic in our Evidence-Based Practice in Medicine series.  

Evidence-Based Practice in Medicine Part 1: What Does It Mean?

Prior to the COVID-19 outbreak, the average person may never have heard the term “evidence-based,” but this term has been used in the medical field for decades.  When we say that our clinical practice in physical therapy is evidence-based, we mean that the treatments, techniques, and clinical decisions we make are based on the best research and clinical experience available.  The same applies to MDs prescribing specific drugs and vaccines to combat illness, surgeons choosing specific surgical procedures, dentists applying topical treatments to prevent cavities, nurses wearing special protective gear to prevent infection, and so on.   

So where does the “evidence” come from?  First, let’s consider the alternative. Non-evidence-based medicine might use intuition, tradition, inconsistent anecdotes, personal histories, unsystematic clinical experience, etc. to inform diagnosis and treatment decisions.  Prior to the development of the scientific method, these were the only tools healers had to work with. If something worked once, perhaps it would work again, and over time they developed a list of “cures” for various ailments. Some legitimately helped; many did not.  Sometimes the cures were worse than the original illness or injury.  For example, during the 1918 Spanish Flu pandemic, aspirin was found to help with fever. However, some patients died of aspirin toxicity when given large doses, because so little was known about that then-new drug. 

We’ve come a long way in medicine since the days of bloodletting, non-antiseptic surgery, and lobotomies. This is because most medical practice today is based on research using the scientific method.  So what is this method? (Adapted from Live Science.) 

The steps of the scientific method go something like this: 

  • Make an observation or observations. 
  • Ask questions about the observations and gather information. 
  • Form a hypothesis — a tentative description of what’s been observed and make predictions based on that hypothesis. 
  • Test the hypothesis and predictions in an experiment that can be reproduced. 
  • Analyze the data and draw conclusions; accept or reject the hypothesis or modify the hypothesis if necessary. 
  • Reproduce the experiment until there are no discrepancies between observations and theory. The reproducibility of published experiments is the foundation of science: no reproducibility means no science, and therefore no evidence. 

Some key underpinnings to the scientific method: 

  • The hypothesis must be testable and falsifiable. Falsifiable means it must be possible, at least in principle, for an observation or experiment to show that the hypothesis is false. 
  • Research must involve deductive reasoning and inductive reasoning. Deductive reasoning moves from general premises to a logically certain specific conclusion, while inductive reasoning moves from specific observations to a probable general conclusion. 
  • An experiment should include an independent variable (the one the researcher deliberately changes) and a dependent variable (the one measured in response). 
  • An experiment should include an experimental group and a control group. The control group is what the experimental group is compared against. 

Here is a visual way to think of the scientific method:

[Flowchart of the scientific method]

As you can see, the process repeats as we gather more and more data regarding a particular question. Additional research helps us to reach conclusions we can apply to activities in the everyday world.  For example, years ago in physical therapy we used to administer ultrasound massage to patients with low back pain. The patients may have felt better after having some warm gel massaged into their low back muscles, but was the treatment affecting the cause of their pain? After many research studies, we now believe that ultrasound likely has no significant effect on the overall resolution of low back pain. So nowadays we rarely apply ultrasound, because we have other, more effective treatments.  Does that mean that ultrasound never helps? No, but it may not be the best use of our limited time with patients. Deciding whether to use something like ultrasound is part of our clinical reasoning, an important part of our treatment choices; but that’s a topic for another time. 

Through the scientific method, we hope to weed out what doesn’t work and focus on what does.  We then build on the most effective treatments, and as science advances our understanding, we are able to cure illnesses and heal injuries that were untreatable in the past. Whether it’s wiping out smallpox with vaccines or reversing heart disease with changes in diet and exercise, the scientific method helps to improve our health and save our lives.  However, like everything human-made, it’s not perfect. In the next installment of this series, we will look at Levels of Evidence to learn more about why some research studies are flawed or skewed, as well as other ways in which the scientific method fails to fully answer our questions.  
