Searching MEDLINE is especially useful for rare events such as case reports or uncommon drug side effects and interactions. This approach is cumbersome for day-to-day questions because searches typically turn up many articles that are not strong enough to be clinically useful, and because searches miss many relevant articles. Searches can be made more sensitive and specific with validated search strategies [12,13]. Searching is now free and relatively easy for anyone connected to the Internet (PubMed at www.ncbi.nlm.nih.gov/PubMed, or Grateful Med at the same address).

Books - Printed textbooks are familiar, easy-to-use sources of information, especially for issues where the information base is not changing rapidly (eg, diagnosis of the acute abdomen). However, for time-sensitive questions, few of us have a current, comprehensive collection, and even newly published textbooks are several months out of date, at a minimum, when they first appear.

Journal reviews - Reviews published in journals are relatively current at the time of publication; a typical lag between acceptance and publication is six months. Traditional narrative reviews are most suitable for multifaceted questions (eg, modern management of diabetes mellitus). Their disadvantages are that they tend to lag behind the best research evidence available when they are written and may reflect the biases of the author(s) without making those biases explicit.

Web Sites - The World Wide Web includes credible, up-to-date sources of medical information in fast-moving fields. Some especially useful ones are:

Health advice for international travelers: www.cdc.gov;
National Guideline Clearinghouse: www.guideline.gov;
Patient Support Organizations: healthhotlines.nlm.nih.gov/subserch.html

JUDGING THE CREDIBILITY OF RESEARCH RESULTS - Clinicians should be able to analyze in depth research articles that are especially important to their practice or that are controversial. They should be in a position to experience the power, independence, and enjoyment of critically analyzing articles on their own or with colleagues in a local journal club. The basic elements of critical reading are internal validity and generalizability.

Internal validity - Are the results of clinical research correct for the patients in the study? Internal validity is threatened by two processes: bias and chance. Bias is any systematic error (eg, in assembling patients for study, allocating them to comparison groups, following them up, or measuring outcomes) that might distort the observed result relative to the true situation. Chance is random error, inherent in all observations. The probability of chance effects can be minimized by studying a large number of patients; it is described by p-values (the probability of a false-positive result), power (the probability of detecting a true effect, ie, of avoiding a false-negative result), and confidence intervals (the range that is likely to include the true effect size) (show figure 2).

Generalizability - Do the results of the study apply to my patients? Study patients are typically highly selected relative to patients in usual practice: they are referred to academic medical centers, have classic disease, do not have other diseases, and are willing to cooperate. As a result, they may be systematically different from the patients most doctors see from day to day. The user of research results must make a well-informed judgment about whether the study patients are similar enough to their own to serve as a guide to care, or how the guidance should be modified to suit individual patients.
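The confidence interval idea described under internal validity above can be made concrete with a small numeric sketch. The trial counts below are hypothetical, and the normal-approximation formula shown is one common way to compute a 95% confidence interval for a risk difference:

```python
import math

# Hypothetical trial: 20/100 events in the control group,
# 10/100 events in the treated group
a, n1 = 10, 100   # treated: events, patients
b, n2 = 20, 100   # control: events, patients

p1, p2 = a / n1, b / n2
risk_diff = p1 - p2   # observed effect: an absolute risk reduction of 0.10

# Standard error of the risk difference (normal approximation)
se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)

# 95% confidence interval: the range likely to include the true effect
lo, hi = risk_diff - 1.96 * se, risk_diff + 1.96 * se
print(f"risk difference {risk_diff:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

Here the interval excludes zero, so the observed 10 percent absolute risk reduction would conventionally be called statistically significant, though only barely; a smaller trial with the same event rates would yield a wider interval.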
The appropriate research design depends on the question (show figure 3). As an example, a randomized controlled trial is best for information on the effects of a therapeutic or preventive intervention, while a cross-sectional study is best for evaluating the performance of a diagnostic test.
Few physicians were taught critical appraisal skills in medical school and residency. Now there are many opportunities to learn critical reading skills from books [15-18], journal articles, courses, and special sessions of professional meetings.
Full critical appraisal, one article at a time, is time-consuming and not feasible for most practicing physicians most of the time. A variety of shortcuts, of varying effectiveness, are used to delegate critical appraisal, such as relying on a respected journal or a trusted colleague. Readers should understand that these proxies are far from perfect. In a study of the effects of peer review and editing on the quality of reporting in articles published in Annals of Internal Medicine, the articles were improved but remained well short of perfect after careful review [19].
Critical appraisal skills, short of full, independent reviews, can be useful in day-to-day information management. These skills help clinicians make wiser choices among information sources - for example, by looking at what a source cites as evidence and how it weighs evidence from conflicting studies. These skills can also make informal reading more efficient by making it easier to spot especially strong or weak articles.

APPLYING STUDY RESULTS TO THE CARE OF PATIENTS - Studies of the care of patients in many settings have consistently shown a gap between the recommendations of experts, based upon the best available evidence, and actual practice. Barriers include genuine concern about applying the results of large studies to individual patients, misunderstanding of the evidence itself, unawareness of the research results, and problems with how care is organized [20].
Tailoring research results to individual patients - The general guidance provided by clinical practice guidelines, algorithms, systematic reviews (including meta-analyses), and the like is meant to describe the best course of action on average, everything else being equal. To come closer to an estimate of what the results would be for an individual patient, it may be possible to find the answer in subgroups of the study patients, defined by characteristics such as age, sex, severity of disease, and presence of risk factors. Examining subgroups is worth the effort, but with two caveats. Because studies are designed to have just enough patients to detect the main effect, subgroups may include too few patients to detect effects even when they are really present; that is, subgroups are at risk for false-negative results. Furthermore, when many subgroups are examined, there is an increased risk that one of them will show an effect by chance alone; that is, a false-positive result.
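The second caveat, the growing risk of a chance finding as more subgroups are examined, follows from simple arithmetic. A brief sketch, assuming independent subgroup tests each run at the conventional 5 percent significance level:

```python
# If each of k independent subgroup tests has a 5% false-positive rate,
# the chance that at least one subgroup "shows" an effect by chance
# alone is 1 - 0.95**k, which grows quickly with k.
alpha = 0.05
for k in (1, 5, 10, 20):
    p_any = 1 - (1 - alpha) ** k
    print(f"{k:2d} subgroups: P(at least one false positive) = {p_any:.2f}")
```

With 20 subgroups, the chance of at least one spurious "significant" finding is roughly 64 percent, which is why an isolated subgroup effect deserves skepticism.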
Another way to bring rigorous research results to bear on individual patients is to do a "trial of N = 1" [21]. Alternative treatments are given to the patient in random order, and the patient, who is kept unaware of which treatment he or she has taken, reports the effect. After many repetitions, the pattern of results is as scientifically rigorous as a randomized controlled trial, but for that one patient only. Trials of N = 1 are a more rigorous version of therapeutic trials, or trial and error, already widely used in medicine. They can only be done for conditions, such as migraine or hypertension, that respond quickly to interventions, and with interventions, such as some drugs, to which patients can be "blinded."
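The random ordering at the heart of an N = 1 trial can be sketched in a few lines. The pairing of active drug with placebo and the period labels below are illustrative assumptions, not a prescribed protocol:

```python
import random

# Hypothetical N-of-1 schedule: active drug (A) and placebo (P) are
# given in randomly ordered pairs, so each pair contains one period of
# each treatment but neither patient nor clinician can predict which.
def n_of_1_schedule(n_pairs, seed=None):
    rng = random.Random(seed)
    schedule = []
    for _ in range(n_pairs):
        pair = ["A", "P"]
        rng.shuffle(pair)  # randomize order within the pair
        schedule.extend(pair)
    return schedule

print(n_of_1_schedule(3, seed=1))
```

Pairing guarantees balance (equal exposure to each treatment), while shuffling within pairs preserves unpredictability; the outcome reported in each period is then compared across treatments.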
Even after efforts to obtain research results that match the individual patient as closely as possible, the evidence must be interpreted in relation to that patient. Evidence-based medicine is not intended to replace clinical judgment. Each patient should be cared for with the best research evidence as a benchmark, but with care tailored to his or her individual circumstances - genetic makeup, past and concurrent illnesses, health-related behaviors, and personal preferences.
Applying the evidence in practice - A substantial body of research, as well as practical experience, has demonstrated that all of us, as we care for patients, commit systematic errors of omission and commission relative to the best available research evidence. Prominent examples are the widespread failure to prescribe beta blockers after acute myocardial infarction or "controller" medications (eg, inhaled corticosteroids) in persistent asthma, the prescription of antibiotics for acute cough, and the use of radiologic tests for uncomplicated acute low back pain.
In some cases, failure to practice according to the best current evidence stems from ignorance, but knowledge alone rarely changes behavior [22]. The table lists possible influences on clinicians' behavior, roughly in descending order of strength, based on a growing research literature on physician behavior change and on common sense (show table 3). Usually no single influence is strong enough to produce important change; combinations are necessary. In general, changing clinical behavior requires not just information but also time set aside for rethinking practice habits.
1. Sackett, DL, Straus, SE, Richardson, WS, et al. Evidence-based medicine. How to practice and teach EBM, 2nd edition, Churchill Livingstone, Edinburgh 2000.
2. Sackett, DL, Rosenberg, WM, Gray, JA, et al. Evidence-based medicine. What it is and what it isn't. BMJ 1996; 312:71.
3. Geyman, JP, Deyo, RA, Ramsey, SD. Evidence-Based Clinical Practice, Butterworth-Heinemann, Woburn, MA 1999.
4. Isaacs, D, Fitzgerald, D. Seven alternatives to evidence based medicine. BMJ 1999; 319:1618.
5. Richardson, WS, Wilson, MC, Nishikawa, J, Hayward, RS. The well-built clinical question: a key to evidence-based decisions. ACP J Club 1995; 123:A12.
6. Covell, DG, Uman, GC, Manning, PR. Information needs in office practice: Are they being met? Ann Intern Med 1985; 103:596.
7. Williamson, JW, German, PS, Weiss, R, et al. Health science information management and continuing education of physicians. Ann Intern Med 1989; 110:151.
8. Fletcher, RH, Fletcher, SW. Evidence-based approach to the medical literature. J Gen Intern Med 1997; 12 Suppl 2:S5.
9. Godlee, F. The Cochrane Collaboration. Deserves the support of doctors and governments. BMJ 1994; 309:969.
10. Hayward, RS, Wilson, MC, Tunis, SR, et al. More informative abstracts of articles describing clinical practice guidelines. Ann Intern Med 1993; 118:731.
11. Lau, J, Antman, EM, Jimenez-Silva, J, et al. Cumulative meta-analyses of therapeutic trials of myocardial infarction. N Engl J Med 1992; 327:248.
12. Haynes, RB, Wilczynski, NL, McKibbon, KA, et al. Developing optimal search strategies for detecting clinically sound studies in MEDLINE. J Am Med Inform Assoc 1994; 1:447.
13. McKibbon, KA, Walker-Dilks, CJ. Beyond ACP Journal Club: How to Harness MEDLINE for therapy problems. [Editorial]. ACP J Club 1994 July-Aug:A-10 (Ann Intern Med vol 121, suppl 1).
14. Antman, EM, Lau, J, Kupelnick, B, et al. A comparison of results of meta-analyses of randomized controlled trials and recommendations of clinical experts. JAMA 1992; 268:240.
15. Fletcher, RH, Fletcher, SW, Wagner, EH. Clinical Epidemiology. The Essentials, 3rd ed, Williams and Wilkins, Baltimore 1996.
16. Sackett, DL, Haynes, RB, Tugwell, P. Clinical Epidemiology. A Basic Science for Clinical Medicine. 2nd ed, Little, Brown Co, Boston 1991.
17. Riegelman, RK, Hirsch, RP. Studying a study and testing a test. How to read the medical literature, Little, Brown Co, Boston 1989.
18. Users' Guides to the Medical Literature. A manual for evidence-based clinical practice, AMA Press, Chicago 2002. www.usersguides.org
19. Goodman, SN, Berlin, J, Fletcher, SW, Fletcher, RH. Manuscript quality before and after peer review and editing at Annals of Internal Medicine. Ann Intern Med 1994; 121:11.
20. Haynes, B, Haines, A. Barriers and bridges to evidence based clinical practice. BMJ 1998; 317:273.
21. Guyatt, G, Sackett, DL, Taylor, DW, et al. Determining optimal therapy - randomized trials in individual patients. N Engl J Med 1986; 314:889.
22. Davis, DA, Thomson, MA, Oxman, AD, Haynes, RB. Changing physician performance. A systematic review of the effect of continuing medical education strategies. JAMA 1995; 274:700.