CDC’s Program Evaluation Journey: 1999 to Present

Daniel P. Kidder, PhD, MS, Centers for Disease Control and Prevention, Office of the Director, Program Performance and Evaluation Office, 1600 Clifton Rd NE, MS D-37, Atlanta, GA 30329, USA. Email: dkidder@cdc.gov

Keywords: program evaluation, monitoring and evaluation, continuous program improvement, performance measurement, CDC

Copyright © 2018, Association of Schools and Programs of Public Health

In the past decade, government agencies, foundations, community- and faith-based organizations, and others have paid increasing attention to using evidence as a decision-making driver for their programs, with a focus on using evaluation and performance management data for program improvement. At the same time, converging factors have shifted perspectives about program monitoring and evaluation, from merely tolerating them as necessary evils to embracing them as essential organizational practices. These factors can be traced to the mid-1990s and the National Partnership for Reinventing Government initiative, which advocated for an enhanced culture of accountability among government agencies.1 More recently, a series of developments further accelerated the use of program monitoring and evaluation, particularly within government agencies.

In the United States, members of Congress, leaders of state governments, and other decision makers have sought to understand how performance management data can be used to monitor programs and identify when evaluations are appropriate. The 2017 Report of the Commission on Evidence-Based Policymaking,2 the 2003 and 2013 Government Accountability Office publications on program evaluation,3,4 and the Foundations for Evidence-Based Policymaking Act of 2017 5 are examples of the increasing interest of US government leaders and decision makers in the practice of program monitoring and evaluation. The focus has been on building evaluation capacity (ie, the ability to conduct evaluation and use it to improve results) and on increasing the use of administrative data. The Centers for Disease Control and Prevention (CDC) began incorporating evaluation into its programs earlier than many other organizations did.

In this Executive Perspective, we, both health scientists and program evaluators at CDC, highlight the path that our agency has followed to foster the use of evaluation. Our intent is to identify evaluation practices and policies that other organizations can replicate, while also highlighting what we have learned about the challenges of using evaluation. We will describe evaluation at CDC, a federal agency, but the lessons we have learned may apply to any level of government or to any nongovernmental organization that wishes to improve its programs.

In the context in which we are using the term, evaluation refers to collecting, analyzing, and using data to examine the effectiveness and efficiency of programs and to contribute to continuous program improvement. The practice of program evaluation at CDC advanced considerably with the 1999 release of the report, Framework for Program Evaluation in Public Health.6 Since its release, the Framework has been a primary driver of CDC evaluations and has made them more situation-specific, participatory, appropriately designed, and appropriately implemented. The Framework’s 6 steps in evaluation practice and 4 standards for effective evaluation (Figure 1) are commonly used today, but at the time of their release in 1999, they constituted an important shift in thinking at CDC. The 4 Framework standards committed CDC staff members to performing program evaluations that were accurate, useful, feasible, and ethical. The Framework’s steps in evaluation practice placed special emphasis on setting the appropriate evaluation focus (step 3) by first engaging stakeholders (step 1) and clearly describing the program to be evaluated (step 2). Step 6 committed CDC staff members to ensuring that evaluation findings were used and lessons learned were shared, elements that had been missing from many previous CDC evaluations. More importantly, the Framework ensured that CDC staff members viewed evaluation as integral to a cycle of continuous program improvement. As used at CDC, this cycle consists of program planning, implementation, performance measurement and monitoring, and evaluation, all operating in concert to iteratively implement, test, and refine program approaches for maximum effectiveness (Figure 2). The Framework continues to serve as the backbone of the CDC evaluation process.

Figure 1. Centers for Disease Control and Prevention Framework for Program Evaluation in Public Health, 1999.6 The Framework was developed to guide public health professionals in using program evaluation, was designed to summarize and organize the essential elements of program evaluation, and comprises 6 steps in evaluation practice and 4 standards for effective evaluation.

Figure 2. The continuous program improvement cycle, used internally by the Centers for Disease Control and Prevention Program Performance and Evaluation Office. The parts of the cycle operate together to iteratively implement, test, and refine program approaches for maximum effectiveness.

CDC is a large and complex organization. Its size and complexity present challenges to the evaluation of CDC programs, but they also allow evaluation to be incorporated into many programs at once. CDC comprises 13 centers, institutes, and offices, each with its own budget lines, constituencies, and areas of focus. Each center also has its own culture, and the pace of organization-wide change can be helped or hindered by these cultures. Nearly 80% of CDC funding goes to extramural organizations, and this distribution affects how program evaluation is adopted at the agency.7 Public health work usually occurs on the front lines, at state and local health departments, ministries of health, and community-based organizations. The Framework’s organizational emphasis on stakeholders and program evaluation has pushed CDC staff members and the agency’s funded partners to be mindful of the perspectives, needs, and constraints of those working on the front lines. The result has been a recognition that CDC program evaluations must be useful not only to CDC, but also to the agency’s local partners and the communities affected by CDC programs.

In 2010, CDC created its first chief evaluation officer position, filled by one of us (T.J.C.), with responsibility for overseeing and championing program evaluation across CDC. This new position was housed in a new Program Performance and Evaluation Office (PPEO), which brought program evaluation, performance measurement, and planning, previously dispersed across CDC, under a single high-level office (the Office of the Director), and allowed one office to champion continuous program improvement principles across the agency. A second evaluation officer position was created in 2013 (D.P.K.), and both officers currently serve in the PPEO leadership.

Several CDC directives and reports have also advanced program evaluation at CDC by compelling program leaders to focus on applied evaluation and by providing them with the support needed to do it. In December 2012, then-CDC Director Thomas Frieden issued a report, Improving the Use of Program Evaluation for Maximum Health Impact: Guidelines and Recommendations.8 This report came out of consultations with experts at CDC, and it incorporated recommendations from the Advisory Committee to the CDC Director. It described monitoring and evaluation expectations for the centers, large programs, and recipients of CDC funding. It also committed CDC to investments that would support evaluation capacity at the agency. This report was a substantial step toward encouraging strong program evaluation at CDC.

One result of the director’s report was the creation of a revised template for CDC non-research funding opportunity announcements (Notice of Funding Opportunity [NOFO]). Previously, applicants for funding often reported that CDC NOFOs varied widely from program to program. The report recommended that CDC programs use a standard template for all new non-research funding opportunity announcements. This new template, which is still in use, requires the inclusion of a simple logic model that graphically depicts program activities and intended outcomes. The logic model serves as an outline for subsequent sections, in which CDC programs describe for potential applicants the expected activities, work plans, and outcomes. More importantly, the new NOFO template specifies the process and outcome measures that recipients are expected to report to demonstrate that they have implemented their strategies and activities and achieved the intended outcomes. The template also instructs funding recipients to describe how their findings will be used to improve the effectiveness of their activities.

To support use of the revised template, PPEO staff members have provided technical assistance to CDC program staff members and reviewed drafts of the approximately 70 NOFOs produced each year at CDC. Although implementation of the revised NOFO template has had its challenges, CDC staff members now report a level of alignment and clarity that did not previously exist in CDC NOFOs. The new template has helped CDC staff members understand logic models and use them to more clearly describe programs and monitoring and evaluation expectations. It has also encouraged funded recipients to incorporate evaluation earlier in their program planning.

The PPEO has helped CDC programs implement other recommendations from the 2012 director’s report by providing ongoing support for evaluation training and capacity building. This support has helped CDC to integrate evaluation into its programs and use data to continuously improve those programs. In 2011, the PPEO began an Evaluation Fellowship Program. CDC programs interested in improving their use of evaluation may apply to have a Fellow work with them. Fellows have a doctorate or master’s degree and experience in applied evaluation. They work with their host program for up to 2 years. They also spend about 20% of their time conducting 12-week evaluation projects with other programs, as a way to provide expertise to programs that do not have a Fellow. To date, more than 100 Fellows have completed the Evaluation Fellowship Program; almost all have remained in public health, and about 60% have stayed at CDC. These Fellows have helped to improve evaluation practice throughout CDC and the public health field.

The PPEO also uses a limited amount of its funding to maintain a group of subject matter experts who are proficient in evaluation and program planning. CDC programs whose staff members lack expertise in evaluation may apply for subject matter expert consultations, and about 20 programs are selected to receive these consultations each year. Each consultation is limited to about 20 hours, which is typically enough time to help programs work through specific challenges, such as developing a program logic model, identifying evaluation questions, or determining the most appropriate evaluation design.

The PPEO also provides CDC staff members with other tools, resources, webinars, and trainings, including 10 courses at CDC University, all taught by PPEO staff members and outside experts. One of these is a program evaluation course, based on the CDC Framework, that is required for all CDC project officers (ie, those who oversee funded opportunities). The course provides an evaluation foundation for CDC staff members who have primary contact with those who implement and evaluate programs on the front lines in local and state jurisdictions and internationally.

The PPEO has worked to create a culture of learning and evaluation capacity building across the agency, with the goal of assembling a network of evaluators who learn and gain support from each other. The CDC evaluation Listserv now has more than 1000 members, and many CDC centers and campuses have evaluation communities of practice that meet regularly to share information about evaluation processes and products.

The PPEO also sponsors other regular learning opportunities, including 4 thematic roundtables each year. The roundtables are interactive, led by outside experts and CDC peers, focused on evaluation topics, and provided in partnership with other CDC offices. To date, about 20 roundtables have featured speakers who discussed topics such as evaluating social media, training, and policy; communicating evaluation data (eg, data visualization); and the intersection of performance management and evaluation. The roundtables bring together CDC staff members who are facing similar challenges. The PPEO also oversees an annual CDC Evaluation Day, during which CDC peers deliver more than 100 oral presentations and posters focused on evaluation challenges and successes. This event, which draws about 400 people each year, provides CDC staff members with another opportunity to interact with colleagues and learn about evaluation.

Despite CDC’s progress in program evaluation, the PPEO must continue its work. The PPEO is helping CDC offices to better understand and use the data being collected from funded recipients and to provide those recipients with more timely feedback for program improvement. The PPEO is also approaching performance monitoring in the same way that it has approached program evaluation: through roundtables, trainings, webinars, and technical assistance. Ultimately, the goal is for CDC to consistently integrate evaluation and continuous program improvement into its internal and funded programs, from inception to completion.

Footnotes

Authors’ Note: The findings and conclusions in this article are those of the authors and do not necessarily represent the official position of the Centers for Disease Control and Prevention.

Declaration of Conflicting Interests: The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding: The authors received no financial support for the research, authorship, and/or publication of this article.

References

1. US General Accounting Office. Reinventing government: status of NPR recommendations at 10 federal agencies. GAO-GGD-00-145. https://www.gao.gov/new.items/gg00145.pdf. Published September 2000. Accessed May 25, 2018.

2. Commission on Evidence-Based Policymaking. The promise of evidence-based policymaking: report of the Commission on Evidence-Based Policymaking. https://www.cep.gov/content/dam/cep/report/cep-final-report.pdf. Published September 2017. Accessed May 25, 2018.

3. US Government Accountability Office. Program evaluation: strategies to facilitate agencies’ use of evaluation in program management and policy making. GAO-13-570. https://www.gao.gov/assets/660/655518.pdf. Published June 2013. Accessed May 25, 2018.

4. US General Accounting Office. Program evaluation: an evaluation culture and collaborative partnerships help build agency capacity. GAO-03-454. https://www.gao.gov/new.items/d03454.pdf. Published May 2003. Accessed May 25, 2018.

5. HR 4174, 115th Congress, 1st Sess (2017).

6. Centers for Disease Control and Prevention. Framework for program evaluation in public health. MMWR Morb Mortal Wkly Rep. 1999;48(RR-11):1–58.