Use Evaluation Strategies
You have probably heard “If you want to improve something, measure it” or “You can’t improve something if you don’t measure it.” In health care, this holds true for more than lab values.
To improve health care delivery, it is important to know which factors you are aiming to improve and to use the correct measures to evaluate your efforts. The measurement data can help:
- Diagnose strengths and weaknesses in practice performance
- Make improvements and innovations in services
- Manage patients' needs more effectively
- Evaluate changes in results over time
Principles for Using Data to Support Clinical Improvement 
- Seek usefulness in measurement
Decide what data will be most useful for measuring the change you are seeking and is relatively easy to collect. Data that can be captured during the routine flow of daily work may be the easiest to collect and a good starting place. Only collect data that you will use. The utility of data is directly related to the timeliness of feedback and the appropriateness of its level of detail for the persons who use it.
- Use a balanced set of process, outcome, and cost measures
Typically, one data point is affected by a previous intervention or interaction and is followed by another data point or result. Balanced measures may cover "upstream" processes and "downstream" outcomes to link:
- causes with effects
- anticipated positive outcomes and potential adverse outcomes
- results of interest to different stakeholders such as patient, family, employer, community, payer, and clinician
- cumulative results related to the overall aim
- specific outcomes for a particular change cycle
- Keep measurement simple (think big, but start small)
Focus data collection on a limited, manageable, and meaningful set of measures.
- Use quantitative and qualitative data
Quantitative data reports clinical outcomes and behaviors. Qualitative data identifies how the patient and health care team experience a new procedure or system.
- Have agreement on the definition of each measure
The better the operational definition of each measure, the better the data elements. The better the data elements, the more reliable and valid the aggregate measures.
- Measure small, representative samples
Small, representative samples can provide useful data without the cost and time of collecting data on all patients and on all encounters.
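As a rough sketch of how a small, representative chart audit might be drawn, the snippet below samples 20 records at random from a month of encounters. The patient identifiers, encounter count, and sample size are all hypothetical:

```python
import random

# Hypothetical list of patient chart IDs for this month's encounters
charts = [f"PT{n:04d}" for n in range(1, 201)]  # 200 encounters

random.seed(42)  # fixed seed so the audit sample can be reproduced for review
sample = random.sample(charts, 20)  # review 20 charts instead of all 200

print(len(sample))       # 20 charts in the audit
print(len(set(sample)))  # all distinct: drawn without replacement
```

Because `random.sample` draws without replacement, no chart is reviewed twice, and a fixed seed lets a second reviewer reproduce the same sample.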
- Build measurement into daily work
Make data collection as easy as possible by collecting key measures as part of everyday work.
- Develop a measurement team
Team up to lighten the data collection workload, add knowledge, and boost morale.
Outcome and process measures
Measures that can show improvement in the care of people with diabetes:
Diabetes Outcome Measures – are typically laboratory measures that indicate clinical status such as A1C, blood pressure, and cholesterol levels.
Process Measures – provide insight into which processes influence the outcome measures, such as the percentage of patients who receive diabetes self-management education and meet their target diabetes measures. Other measures could determine how well decision support, clinical information systems, and delivery system design such as team care are working. It is important that changes to improve one part of the system do not cause new problems in other parts of the system.
Differences Between Quality Improvement and Clinical Research
An understanding of the differences between quality improvement and clinical research can help users to structure systems changes and their evaluation without going through the Institutional Review Board (IRB).
Collection of data – Clinical research vs. Quality improvement
To develop new knowledge, clinical research uses rigorous methods to measure variables of interest and prevent the effects of confounding variables.
By comparison, clinical improvement studies aim to improve outcomes through the application of known information.
- The interventions are observable to the investigator (e.g., self-management education to enable patients to meet their management goals)
- Just enough data is collected to test whether the change results in an improvement (Did patient education improve self-care practices?).
- Multiple steps are tested in a sequential fashion (Cycle 1: invitation to participate in self-management education. Cycle 2: level of patient participation. Cycle 3: use of self-care practices, etc.).
Enumerative vs. Analytic statistics
Enumerative statistics are used in clinical research to evaluate the outcome of testing a hypothesis. The analysis assumes a stable system, one in which all variables are held constant except the one under study. The goal is to estimate whether the outcomes between the control and study group are different. The statistics ascribe a degree of confidence to the accuracy of the estimate.
Analytic statistics are used to evaluate quality improvement efforts. Using Plan-Do-Study-Act (PDSA) cycles is one way to evaluate clinical improvement. The goal of the analysis is to determine the stability of the process producing the data.
For example, will the patient call system that increased the rate of eye exams from 36 percent to 70 percent consistently result in the higher percentage of patients having annual exams?
In this example the accuracy of the measure is not the issue (was the improvement in the rate of eye exams 70 percent or 68 percent or 72 percent?). Rather, if the process is statistically stable, one can assess its current performance and take action either to predict future performance or to measure the effects of an improvement intervention. For example, now that eye exam rates have improved to 70 percent, how can we further improve the system to increase the rate to 90 percent?
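One common tool for judging whether a process like this is statistically stable is a p-chart, which places 3-sigma control limits around the average proportion. In the sketch below, only the roughly 70 percent rate comes from the example above; the monthly counts and sample size are hypothetical:

```python
import math

# Hypothetical monthly eye-exam completion rates after the patient call system
n = 50                                            # patients sampled each month
monthly_rates = [0.68, 0.72, 0.70, 0.66, 0.74, 0.71]

p_bar = sum(monthly_rates) / len(monthly_rates)   # center line (~0.70)
sigma = math.sqrt(p_bar * (1 - p_bar) / n)        # binomial standard error
ucl = p_bar + 3 * sigma                           # upper control limit
lcl = p_bar - 3 * sigma                           # lower control limit

# The process is deemed stable if every month falls inside the limits
stable = all(lcl <= p <= ucl for p in monthly_rates)
print(f"center={p_bar:.3f}  limits=({lcl:.3f}, {ucl:.3f})  stable={stable}")
```

If all points fall within the limits, the month-to-month variation is "common cause" and the 70 percent performance can be treated as the system's current capability; a point outside the limits, or a sustained run on one side of the center line, would signal a change in the underlying process.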
Translation of clinical trials
While much clinical knowledge of diseases and their treatment is generated through clinical trials, the results of those trials will best be applied to patient populations through the application of quality improvement methods. Unlike the controlled clinical trials that generate such knowledge, patients live in a world with many sources of variation that cannot be controlled.
Clinical efficacy is the desirable outcome that is associated with an intervention under ideal circumstances such as in clinical trials. For example, the Diabetes Control and Complications Trial demonstrated that:
- intensive management compared with standard care resulted in significant lowering of A1C.
- improved control resulted in a 35 percent decrease in microvascular complications for each one percent reduction in A1C.
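If the 35 percent relative reduction per one-point drop in A1C is assumed to apply multiplicatively across successive points, the arithmetic can be sketched as below. This is an illustrative calculation under that stated assumption, not a clinical risk model:

```python
def remaining_risk(a1c_drop_points: float) -> float:
    """Fraction of baseline microvascular complication risk remaining,
    assuming a 35% relative reduction per one-point A1C drop
    (illustrative assumption, not a validated clinical model)."""
    return 0.65 ** a1c_drop_points

# A one-point drop leaves 65% of baseline risk (a 35% reduction);
# a two-point drop leaves 0.65 * 0.65, about 42% (a ~58% reduction).
print(f"{1 - remaining_risk(1):.0%}")  # 35%
print(f"{1 - remaining_risk(2):.0%}")  # 58%
```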
Clinical effectiveness associates the desirable outcomes with an intervention in the real world. For example, Health Partners of Minnesota provides a Clinical Indicators Report of comparative provider performance on measures of clinical quality, patient experiences, and affordability.
The difference between the efficacy and effectiveness of an intervention defines the performance "gap." Implementation of quality improvements can successfully bridge the gap and result in improved clinical outcomes. These improvements can be developed and implemented through multiple small-scale improvements in components of the health care delivery system. See PDSA cycles in section on How to Transform Practices.
Individual physicians will interpret the results of studies and determine whether or how to incorporate the findings in their clinical practice. Unlike in clinical studies, the patients receiving treatment are not a selected population, so the results of clinical trials are applied in an environment that differs from the research setting. To make better predictions and decisions, analytic statistics can assess deviations from expected results and identify sources of variation.
Resources and References
National Guideline Clearinghouse (NGC)
The NGC is a publicly available database of evidence-based clinical practice guidelines and related documents.
National Quality Measures Clearinghouse
NQMC is a public resource for evidence-based quality measures and measure sets.
NCQA Diabetes Recognition Program
The National Committee for Quality Assurance (NCQA) has developed the Diabetes Recognition Program. This voluntary program recognizes physicians and other clinicians who use evidence-based measures and provide quality care to their patients with diabetes.
Note that performance measures are indicators or tools to assess the level of care provided within systems of care to populations of patients with diabetes. They do not reflect either the minimal or maximal level of care that should be provided to individual patients with diabetes.
National Diabetes Education Program Control and Prevention Campaigns
NDEP translates the latest science into campaigns and materials.