Evaluating Computer Science Education: Why and for Whom?
By Jill Denner, PhD | November 5, 2015
Senior Research Scientist, ETR
Note: ETR’s Jill Denner recently contributed a post to the American Evaluation Association’s blog AEA365 | A Tip-a-Day by and for Evaluators. This was part of their STEM Education and Training Topical Interest Group Week. With AEA’s permission, we are reposting Dr. Denner’s article. You can find the original here. If you’ll be attending AEA’s “Evaluation 2015” conference in Chicago next week, be sure to look for ETR’s team of researchers. Attending members include Pam Drake, Lisa Unti, BA Laris, Liz McDade-Montez and Jill Glassman.
My name is Jill Denner from Education, Training, Research (ETR), a nonprofit organization that does research, evaluation, program development and professional development. We partner with schools and colleges to increase the interest and capacity of girls and Latino/a students to pursue computer science.
Computer Science Education in K–12 is a relatively new space. It is a young discipline that is trying to distinguish itself from other Science, Technology, Engineering and Mathematics (STEM) fields. And rightfully so.
The “T” (technology) is different in many ways: there is less diversity in “T” classes and programs than in other STEM fields. Most programs do not have clear goals or a logic model that describes how their activities will lead to those goals. There are many possible learning outcomes, but few validated measures, established theories or clear stakeholders who can drive key decisions about evaluation design, sampling and measurement.
Evaluation can make real contributions to a field that is in its infancy, but we need to know who the stakeholders or audience are and what they want to learn. In CS Education there are different views, but most stakeholders want to know who is benefiting, and why or why not. For example:
- Funders require evaluation to document return on investment. These include the U.S. National Science Foundation, private foundations such as the Motorola Solutions Foundation, and tech companies such as Google.
- Educators and program developers want evaluation to help them improve impact, design new programs and secure more funding.
- Policymakers want to know what programs or policies to invest in.
- Researchers want to inform theory, test hypotheses and fill gaps in knowledge.
How can you do good evaluation without established theories, logic models or measures? This issue is particularly relevant for a field that places a high priority on increasing diversity. The following issues are important to consider when evaluating CS Education:
- Culturally responsive evaluation can help evaluators avoid perpetuating unconscious bias about the type of person who belongs in computing fields.
- Getting demographic information is important, but asking students about their gender or race/ethnicity before questions about computing might trigger stereotype threat and affect responses.
- Studying only individual factors misses the relational and institutional factors that affect participation and program impact.
Some helpful resources:

- Visit Mark Guzdial’s blog to learn about issues that are central to Computer Science Education, including his article on challenges facing Computing Education.
- Kimberly Scott and her colleagues have developed a theory of culturally responsive computing.
- Talking points on unconscious bias in the classroom from the National Center for Women & Information Technology (NCWIT) can help evaluators avoid triggering stereotype threat.
- Google’s recent reports on computer science education provide landscape data on key issues.
Jill Denner, PhD, is a Senior Research Scientist at ETR. She does applied research with a focus on increasing the number of women, girls and Latino/a students in computing. She is nationally recognized as an expert in strategies to engage girls/women and Latino/a students in computer science. She can be reached at firstname.lastname@example.org.