
The ASHA Leader Online | Feature

Evidence-Based Practice: Myths and Realities



cite as:
Dollaghan, C. (2004, April 13). Evidence-based practice: Myths and realities. The ASHA Leader, pp. 4-5, 12.

by Christine Dollaghan

How do you feel when you hear the words "evidence-based practice" or "EBP"? In talking about EBP with clinicians, colleagues, and students during the past few years, I've seen reactions ranging from euphoria (admittedly rare) to outrage (thankfully also rare).

The most common feeling, however, seems to be a mixture of curiosity and anxiety: curiosity about the reasons for the "buzz" about EBP, and anxiety over the possibility that EBP will turn out to be just one more unrealistic demand placed on already over-burdened professionals. By briefly describing some of the myths and realities of EBP, I hope to encourage the "EBP-curious" to feel considerably more confident about what this perspective on clinical decision-making can offer to those willing to keep both euphoria and outrage at bay.

Myths and Definitions

By now, most people are familiar with the definition of EBP as ". . . the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients . . . [by] integrating individual clinical expertise with the best available external clinical evidence from systematic research" (Sackett et al., 1996). However, some parts of this definition ("best available external clinical evidence from systematic research") seem to get a lot more attention than others ("individual clinical expertise"). So the first myth about EBP that needs to be dispelled is the idea that evidence from systematic research is the only acceptable basis for clinical decision-making. As Guyatt and colleagues (2000) note, "evidence is never enough"; the EBP framework acknowledges that our own and our patients' experiences, values, and preferences can and should contribute to our clinical decisions.

EBP does require us, however, to identify and make use of the highest quality scientific evidence as one component of our efforts to provide optimal patient care. Unfortunately, this worthy goal is linked to another myth about EBP: namely, that it requires clinicians to spend hours each week scouring the hundreds of newly published articles and textbooks for nuggets of evidence "gold." No practitioner has the time and few have either the inclination or the expertise for such a task.

Instead, proponents of EBP (e.g., Sackett et al., 2000) suggest several strategies by which clinicians can find the relatively rare evidence that is of sufficient quality to influence clinical practice, while at the same time ignoring, or better yet avoiding altogether, the deluge of weaker evidence. These authors suggest that practitioners are likely to have no more than 30 minutes per week to devote to locating and evaluating evidence; thus, their suggestions are oriented around this minimal time investment.

One of their suggestions is that practitioners focus their limited time on evidence from "high-yield" sources. Such sources contain evidence that is current, of high quality (according to the criteria described below), and directly applicable to clinical practice. Sackett et al. (1999, 2000) urge us to examine journals and evidence compilers such as those described below to identify the one(s) most likely to contain quality evidence, and to limit ourselves to these rather than devoting time to low-yield and/or dated sources such as traditional textbooks and journals oriented to "basic science." The de-emphasis in EBP on evidence sources that are difficult to update rapidly, such as traditional textbooks, derives from the explicit acknowledgment that what we "know" at any point is virtually guaranteed to change as science progresses, so our efforts to identify current best evidence should focus on the most contemporary sources.

Similarly, the EBP orientation disavows the longstanding belief that all basic science findings are relevant to clinical practice. The goals, designs, and methods of studies aimed at providing strong answers to questions about clinical practice are in some respects quite different from those of studies aimed at understanding basic mechanisms of disease. In the EBP framework, evidence from studies of basic mechanisms plays a similar role to evidence derived from personal experience or the opinions of authorities; all of these sources can provide fruitful "leads," but these must be followed up in subsequent studies explicitly designed to address questions about clinical practice.

Internet access to high-yield sources and sites exponentially decreases the time needed to locate current best evidence. For example, one free resource sponsored by the Agency for Healthcare Research and Quality provides a compilation of evidence reviews and practice guidelines published by a variety of groups on a wide range of topics. Clinicians can search the Web site for information on specific topics or browse for guidelines in category headings. Although the bulk of information concerns medical conditions, the site contains a number of guidelines on such topics as hearing screening, autism, attention deficit hyperactivity disorder, and learning disorders, providing busy practitioners with rapid access to a synthesis of information on screening, diagnosis, treatment, and prognosis.

Individuals can also register to receive free weekly e-mail updates listing new or revised guidelines, and those of interest can be accessed in a matter of seconds. Similarly, PubMed is a free site sponsored by the National Library of Medicine, in which users can search for specific information from among literally millions of biomedical and other life science citations, and in many cases the complete article can be accessed online. The PubMed site has a number of extraordinarily helpful features, such as a "cubby" in which an individual user can store results from previous searches and ask "what's new" on that topic at a later date, again in a matter of seconds. PubMed also has a "clinical query" search, specifically designed to allow searches concerning diagnosis, therapy, etiology, and prognosis for a given condition using research methodology filters that increase the likelihood that results will be directly relevant to clinical practice. Finally, sites such as the Cochrane Library develop and report systematic evidence reviews on a wide range of topics. These abstracts are available at no charge.

The availability of millions of articles and dozens of sites containing evidence makes it easy to debunk a third myth about EBP: namely, that clinicians can or should be able to "stay current" on every aspect of clinical practice at all times. Instead, Sackett et al. (2000) suggest that we seek evidence mainly when we have specific questions about specific patients, disorders, or procedures. Formulating a specific question (e.g., "Compared to direct, clinician-administered therapy, are parent-administered programs effective treatments for 3-year-olds with specific language deficits?") makes it much easier to zero in on what will usually be a relatively small set of articles. These can then be scanned rapidly to determine whether their quality appears sufficiently high to warrant a full reading. Day-to-day clinical activity will often proceed on the basis of our existing knowledge and experience; EBP implies not that we upend everything that we think we know, but rather that we upgrade our knowledge base in response to particular clinical questions in the explicit, judicious, and conscientious manner described in the definition of EBP.

Critical Appraisal

Evaluating evidence quality depends on a process of critical appraisal, which has been described by a number of authors working in EBP but has been applied only rarely in the literature on communication disorders (e.g., Yorkston et al., 2001). A myth about critical appraisal is that only people who have completed years of specialized study can do it. In fact, Sackett et al. (2000) describe critical appraisal in some detail, and worksheets for evaluating systematic reviews and articles concerning studies of diagnosis, treatment, prognosis, and harm are available online in a section titled "Teaching Materials."

Some of the criteria will be familiar to clinicians (e.g., Were there statistically significant differences between treated and untreated groups? Were the outcome measures valid and reliable?) but others are less familiar, being more specifically tied to studies addressing clinical questions (e.g., Were patients assigned randomly to groups? Were evaluators blinded to group assignment? Were the group differences large or practically significant?). The many excellent sources of accessible information on critical appraisal, including clear and concise online self-tutorials, make it possible for interested individuals to learn to evaluate evidence quality at whatever level of intensity or commitment they choose.

Familiarity with the process of critical appraisal allows us to reject the myth that studies with certain designs, in particular randomized controlled trials (RCTs) of treatment, always provide high quality evidence. Like any other type of study, RCTs can be designed and conducted well or poorly; only those studies that meet the critical appraisal criteria can yield strong evidence concerning treatment. By applying the critical appraisal criteria, we identify the strengths and weaknesses in all kinds of studies, providing a principled basis for resolving disagreements about the optimal approaches to client care. Because few studies meet all of the critical appraisal criteria, reasonable people can disagree about the quality of evidence from a particular study, making it important for individuals to think independently about the validity, importance, and precision of results from empirical studies as a prelude to applying them to clinical care.

In reality, EBP is neither the panacea nor the bugaboo that its mythology has suggested. Rather, EBP offers us a framework and a set of tools by which we can systematically improve in our efforts to be better clinicians, colleagues, advocates, and investigators: not by ignoring clinical experience and patient preferences, but rather by considering these against a background of the highest quality scientific evidence that can be found.


Christine Dollaghan is a professor in the Department of Communication Science and Disorders at the University of Pittsburgh. She is chair of the ASHA Research and Scientific Affairs Committee, which has prepared a technical report on evidence-based practice in communication disorders that is currently undergoing review. Contact her by e-mail.

©1997-2006 American Speech-Language-Hearing Association