
Digital Divas

CHAPTER 2

EVALUATING DIGITAL DIVAS

How Will We Know if the Program Works?

Overview

This chapter describes our position on the importance of evaluation in designing intervention programs, and in particular how we developed our evaluation framework. It presents some of the literature that informed the planning stages and how the research team considered the various inputs to the program, a discussion of anticipated outputs and the assumptions that were made. The chapter concludes with consideration of how we measured the success of the program.

Introduction

Since the 1990s in Australia, there have been many activities aimed at raising schoolgirls’ awareness of careers in IT and stimulating their interest in the field. These activities included: the creation and distribution of career videos; conducting ‘girls in computing’ days or summer camps; and running computer clubs for girls (Craig, Scollary, & Fisher, 2003). Many of these initiatives have been short-term, localised and poorly funded, and have depended very much on one key individual, usually within a school (Craig, Fisher, & Lang, 2009).

The effectiveness of these interventions is questionable. If the declining rates of women participating in the IT workforce are used as a guide, the evidence suggests that short-term, one-off events targeting girls have not been effective. Research indicates that larger intervention programs directed at a broader population are more likely to be effective than smaller projects (Teague, 1999).

One such program was the UK-based CC4G. This program was trialled in one school but, as explained in Chapter 1, did not meet the needs of an Australian audience. It was clear that a specifically designed, sustained intervention program was needed to address the factors identified as discouraging girls’ participation in IT, and that this program needed to be appropriately and effectively evaluated so others could learn from it. The Digital Divas intervention program was brought into Victorian post-primary, middle-school classrooms to meet this challenge.

The Need for More and Better Evaluation

The international GASAT (Gender and Science and Technology) association was established in the early 1980s in response to concerns about issues of gender and science and technology (Harding, 1994). An analysis of the research papers related to gender and computing presented at the eleven GASAT conferences held between 1981 and 2001 categorised the research according to: access to learning; processes of learning; and outcomes of the teaching/learning process.

The majority of the papers presented focused on various dimensions of the ‘access’ of females to computers or ICT. Approximately half of this number addressed issues associated with the ‘processes’ of learning, but a much smaller number documented the ‘outcomes’ of learning other than those associated with subsequent progression to more advanced courses or to careers in computer science or ICT. (Parker, 2004, p. 5)

Parker (2004) promoted the notion that, for social transformation to occur, reports written and presented at such conferences about intervention programs needed to be more precise. Similarly, other researchers have stated that to be influential in bringing about change, policy-makers and practitioners require much more specific information.

An audit of Australian initiatives focusing specifically on females moving into non-traditional areas of work, including ICT, concluded that evaluation of existing or recent strategies ‘is generally lacking or piecemeal’ (Lyon, 2003, p. 3). One of the report’s five recommendations was that there needs to be a ‘greater emphasis on evaluation and any initiatives developed … need strategies attached to measure the progress of women and the impact of the initiative’ (Lyon, 2003, p. 4).

The research literature also revealed that most evaluations of intervention programs have involved exit surveys only, and very few have assessed longer-term impact (Craig, Fisher, Lang & Forgasz, 2011). Several reasons have been postulated for this lack of evaluation of interventions aimed at women and IT: quantitative evaluations that found no significant changes were not considered worth reporting (Teague, 1999); and deliberate decisions were made not to evaluate, possibly because of a lack of resources or expertise, or the perceived difficulty of doing so (von Hellens, Beekhuyzen & Nielsen, 2005).

With this knowledge, the Digital Divas project team ensured that an evaluation dimension was integrated into the project’s implementation, as recommended by Darke, Clewell and Sevo (2002). Similarly, as recommended by Weiss (1998), an evaluation was considered essential to improve understanding of the elements of effective and successful interventions, and to share the results. For the Digital Divas program we adopted an evaluation framework specifically created for women and IT intervention programs (Craig, 2010).

Research Design

A mixed-methods research design was adopted; that is, quantitative and qualitative data were gathered in order to understand and explain the research problem (Creswell, 2008). The details of the research design are presented in Chapter 3. The evaluation of the Digital Divas intervention program was an integral element of the research design. Pre- and post-intervention surveys were administered to the students and the teachers in the Digital Divas classes; pre- and post-intervention focus groups were conducted with students; pre- and post-intervention interviews were conducted with teachers; and Expert Divas (female tertiary students enrolled in IT courses who visited classrooms and assisted teachers) were asked to provide reflective comments on their experiences. [Copies of all instruments used in the evaluation can be found in Appendices B, C, D and E.]

Theoretical Framework for the Evaluation Instruments: The Eccles’ Model

The goals of the evaluation of the Digital Divas program were closely aligned to the aims of the program itself, which were described in Chapter 1. We next describe the basis for our evaluation model and how it was designed.

The Eccles’ model underpinned the contents of the evaluation tools and is discussed first.

In her overview of the history of the model, Eccles (2005) summarised its theoretical basis as follows:

Drawing on work associated with decision making, achievement theory, and attribution theory, we hypothesized that educational, vocational, and avocational choices would be most directly related to individuals’ expectations for success and the importance or value they attach to the options they see as available. (p. 7)

In essence, it is an ‘expectancy-value’2 model that attributes an individual’s academic enrolment decisions to the interaction of expectancy for success and the perceived value of the particular academic choice. Expectancy for success and perceived value will have been influenced by a range of associated and interacting factors including: the cultural milieu; the behaviours, self-concepts, attitudes and expectations of socialisers, and the individual’s interpretations of them; and the individual’s academic history and interpretations of it in terms of attributions for success and failure,3 self-concept of ability/achievement level, and perceptions of task difficulty. A schematic representation of the Eccles’ model is reproduced below (see Figure 2.1).
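
In symbolic shorthand, the expectancy-value core of the model can be sketched as follows. This is an orientation aid only, not a formula taken from the source; the decomposition of subjective task value follows Eccles (2005), and the full model is the path diagram reproduced in Figure 2.1.

\[
\text{Choice}_{o} \;\approx\; f\big(\text{Expectancy of success}_{o},\ \text{Subjective task value}_{o}\big)
\]
\[
\text{Subjective task value}_{o} \;=\; g\big(\text{attainment value},\ \text{intrinsic value},\ \text{utility value},\ \text{cost}\big)
\]

Higher expectancy and higher value make option \(o\) (here, further study of IT) more likely to be chosen, while perceived cost reduces its overall value.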

Eccles’ model provides insights into the stereotyped perceptions of IT that the Digital Divas program aimed to challenge. It illustrates the interplay between socialisation factors and their effects on career preferences and employment outcomes; in our case, the impact on the choice to pursue further studies in IT and to consider IT as a career path. In Figure 2.1 it can be seen that the Eccles’ model postulates that it is the cultural milieu (society, family, school, etc.) in which a girl finds herself, the beliefs and behaviours of members of this milieu, including their views of the place and role of women in IT, and the girl’s perceptions and interpretations of these views and behaviours, that contribute to the development of her personal goals and perceptions of ability in IT. Also contributing to her self-perceptions are her previous experiences of IT and prior achievement, which feed her memories and feelings about IT. Overlaying this is the perceived ‘utility and value’ (Eccles 2005, pp. 10–11) of participation in this non-traditional field for women, and the girl’s expectations of success in the field; these are mediated by the value associated with participation and the personal cost of doing so. In summary, the Eccles’ model reflects the influence of social and cultural capital (Bourdieu, 1977) on perceptions of self-efficacy (Bandura, 1997) and belonging.

The evaluation framework is described next.

Figure 2.1. Eccles et al. model of academic choice (2005 version, p. 8). The arrows in the figure show which indicators influence each other and the direction of influence; the double-headed arrows signify that the indicators influence each other.

The Evaluation Framework

Program evaluation is the assessment of social intervention programs; it involves carefully collecting information about a program in order to make necessary decisions about the intervention (McNamara, 2007). A broad reason for conducting program evaluation, suggested by Rossi et al. (1999), is to ‘provide valid findings about the effectiveness of social programs to those persons with responsibilities or interests related to their creation, continuation, or improvement’ (p. 3).

There is a large body of literature on evaluation models and frameworks to assist in determining the success or otherwise of social programs. An evaluation framework provides key stakeholders in social programs with the knowledge required to conduct useful evaluations, making it more likely that evaluation will take place, that sound judgements can be drawn, and that results will be disseminated.

Craig’s (2010) framework was specifically created for the evaluation of women and IT intervention programs, and as Craig was a chief investigator on the Digital Divas project, it was logical to implement this framework to guide the evaluation of the Digital Divas program. Craig’s framework (see Figure 2.2) was developed drawing on the literature relating to logic models (discussed below).

Figure 2.2. Digital Divas Evaluation Framework (Craig, Fisher & Dawson, 2011)

Table 2.1 is also based on earlier work by Craig (2010) and proposes guidelines for evaluating intervention programs.

Bickman (1987) describes a logic model as a ‘plausible, sensible model of how a programme is supposed to work’ (p. 5). There is some argument that it is not possible to be sure that activity X causes output Y; however, Owen and Rogers (1999) suggest that program causality is central to this framework. They also suggest that there is no basis for developing intervention programs unless program initiators use causal thinking.

Phase | Description

Evaluation Planning
Why: At the start of designing an intervention programme, consider the need to undertake evaluation of the programme: why evaluate the proposed programme?
Who: Who requires the outcomes of any evaluation? Who will be the evaluation team? Will all stakeholders, including multiple team members, be involved?
Resources: What resources are available for the evaluation (e.g., volunteers, expertise, time, money, equipment)?

Understand the Program
Context: What is the problem the programme sets out to solve? What change is expected after the implementation of the programme? Create a logic model showing the programme inputs, activities and expected outcomes/impacts. What are the assumptions (or theories) that will lead from the programme activities to the outcomes/impacts? Are the assumptions realistic? Over what time period are changes expected to occur? What level of change is required to define success?

Evaluation Design
Design: Describe how the evaluation will be conducted and the evaluation activities in the context of why the intervention programme is needed. Design the evaluation activities: what is being measured, how and when? For example: the participants (who will be involved); the results of the programme (the short- and medium-term outcomes in participants’ knowledge, skills or behaviour); the longer-term outputs/impacts; and how the link between change and theory will be evaluated.
Evidence: How will the data be analysed? What evidence is needed, and is it credible for assessing change?
Learn: How will the results be used in future?
Share: How will the lessons learned be shared? How will the evaluation results contribute to the gender and IT literature and theory?

Table 2.1. Guidelines for evaluating intervention programs (Craig, Fisher & Dawson, 2011)

A logic model can support planning, implementing, and evaluating programs, and represents a theory of action (UWEX, 2003). It traces the logical relationships in the sequence leading from the resources invested in the program through to the desired results or impact, and it can be depicted graphically in a variety of ways to show the logical connections between the program’s inputs, outputs, outcomes, and impacts.
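
As a purely illustrative sketch (not part of Craig’s framework or the project documentation), the chain a logic model captures can be thought of as a simple data structure; the example entries below paraphrase the inputs, activities, and outputs described later in this chapter:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LogicModel:
    """Minimal sketch of a program logic model: resources in, results out."""
    inputs: List[str] = field(default_factory=list)      # resources invested in the program
    activities: List[str] = field(default_factory=list)  # what the program actually does
    outputs: List[str] = field(default_factory=list)     # short-term results for participants
    outcomes: List[str] = field(default_factory=list)    # medium-term changes in behaviour/choices
    impacts: List[str] = field(default_factory=list)     # long-term change the program aims for

# Illustrative entries paraphrased from the Digital Divas description in this chapter
digital_divas = LogicModel(
    inputs=["supportive school", "skilled, motivated teacher", "Expert Divas",
            "female guest speakers", "engaging curriculum modules"],
    activities=["semester-long all-girl elective", "weekly Expert Diva classroom visits",
                "guest speaker sessions"],
    outputs=["more positive attitudes towards IT", "greater confidence using IT",
             "broader awareness of IT careers"],
    outcomes=["girls seek further information about IT options",
              "girls select IT units at higher year levels"],
    impacts=["more girls consider IT higher education courses and careers"],
)
```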

The logic model designed for this project had guidelines (see Table 2.1) to support its implementation; however, effective evaluation cannot be guaranteed simply by implementing a formal evaluation framework or model. Chen (1990) argued that the evaluation instruments (e.g., surveys or interview scripts) are critical: the questions used must be constructed so that they measure what they are intended to measure, are practical, and provide useful answers. The adoption of guidelines does not remove the need to work within the constraints of a particular program, such as a lack of resources and time limitations, factors that commonly limit the ability to undertake effective evaluation.

Applying the Evaluation Framework to the Digital Divas Intervention

The three sections of the evaluation – planning, understanding the program, and evaluation design – and the components of each are discussed below in terms of how the evaluation was conducted.

1. Evaluation Planning

There are three key components in planning an evaluation: why, who, and resources. As there are many possible reasons why an evaluation may be conducted, it is necessary to understand the motivation for a particular evaluation. For the Digital Divas program it was established that the purpose of the evaluation would be to test the program’s underlying theory of change, that is, the change expected from the implementation of the intervention, and then to inform the wider community under what conditions the intervention had the desired effects, and for whom. The stakeholders in the Digital Divas program included the participating schools and students, the organisations that partnered with us in the research project, and the wider computing industry community. All members of the project team were involved in the design and analysis of the evaluation, with the majority of the implementation carried out by two members of the team. The resources required to undertake the evaluation – expertise, access, finance, and time – were included in the program budget.

2. Understanding the Program

The problem we set out to address was the small number of secondary school girls choosing to enrol in IT courses at university level. The goals of the Digital Divas program were therefore to:

design materials that would engage girls as part of the middle-school curriculum;

raise awareness and ignite girls’ interest in IT and IT careers;

increase girls’ confidence and positive attitudes towards IT;

identify the factors that influence the program’s implementation; and

create (longer-term) sustainable programs for schools.

The expected change that would result from implementing the intervention program was an increase in the number of girls undertaking computing subjects in later years at the participating high schools (and then ultimately in higher education).

The way the program was expected to work, from the inputs required to run the activities through to the generation of the anticipated outputs, together with the embedded assumptions, was refined and confirmed with the project reference group, which was made up of the major stakeholders in the program.

Assumptions of the Digital Divas Program

For the program to have the expected outcomes, we made a number of assumptions that underpinned the program:

1. The curriculum modules we designed would excite the participants, engender greater self-efficacy, and deliver positive messages about IT.

2. By participating in Digital Divas the girls would acquire knowledge challenging the prevailing stereotypes/myths within society that computing is boring, technical, involves working alone, and is the domain of males.

3. A wide range of IT careers would be presented, involving the analysis, development, programming, design, and problem-solving aspects of IT, not just the use of computers as tools.

4. The role models would present accessible choices of computing careers.

5. The teacher delivering the program would make explicit the links between the activities the girls were undertaking and their real-world significance (e.g., Alice is not just a game but an example of programming).

6. Any increase in motivation/enthusiasm for IT would be maintained from the time the Digital Divas program ended to the time the girls needed to make further subject selections.

7. The wider community (school, parents, other teachers, and other students not participating in Digital Divas) would be supportive of the program and of those participating in it.

8. The students who selected IT-based subjects in later years at high school would be more likely to consider a higher education course in IT.

Inputs Required

To implement a successful program the following five key inputs were required:

1. A school that was supportive, had a champion for the program at the executive level, was prepared to be innovative and flexible with the timetable, and had appropriate IT resources. Appropriate technology was defined as being relatively up-to-date, reliable, and supported by good internet access.

2. A motivated, committed, creative, passionate teacher skilled in IT was required to run the program.

3. Undergraduate female IT students (Expert Divas), who were committed and motivated, had good IT skills, empathy with the students, and could be positive informal role models.

4. Guest speakers who were appropriate female role models.

5. Appropriate curriculum modules that were engaging for participants.

Activities to be Conducted

The Digital Divas program was developed to be of at least one semester’s duration. It was designed to be delivered as an elective at an appropriate grade level and taught to a single-sex class. It was not specified that the teacher needed to be female; however, this was commonly the case in the program. At least one guest speaker was invited each semester to present their story to the students, and an Expert Diva (a female undergraduate IT student) attended at least one classroom session every week.

Outputs Anticipated

The Digital Divas program activities were designed to generate the following short-term outputs for the participating girls:

excitement about the Digital Divas program, which would translate into greater excitement about IT;

a more positive attitude towards IT;

greater confidence using IT;

broader knowledge and awareness of IT careers;

an appreciation of the relevance of IT studies; and

an assurance that IT would not be discounted as a possible subject for study in future years.

In the medium term it was expected that the girls would seek further information about IT options and that they might consider selecting IT units at higher year levels within their school. The longer-term outputs of the Digital Divas program were expected to be that the program would be sustainable within the school and that the girls:

could envision themselves in IT careers;

were more confident and self-sufficient with technology; and

would consider selecting an appropriate IT higher education course.

3. Evaluation Design

Design

The framework and guidelines described above were used first as tools for developing the program and constructing the aims and goals underpinning it, and then to guide the design of the evaluation (data-gathering tools and analyses), including decisions about what was to be measured and when.

Survey items and interview questions were developed by the evaluation team. Their development was informed by the Eccles’ model of academic choice, previous research findings on gender and IT, modified versions of items from pre-existing instruments, and personal experience. [Copies of all survey instruments, focus group and interview protocols are provided in the Appendices to this book.]

The curriculum materials developed for the Digital Divas program were designed and developed by one of the participating teachers who was employed as an educational developer for the duration of this task. Most of the curricula were trialled in this teacher’s class prior to implementation in the wider program.

Evidence: The Data Gathered

Appropriate and credible evidence to be gathered and analysed included the following:

From participating schools

To contextualise each school, information was collected on the size of the school, gender ratio, ethnic makeup, type of school (government/Catholic/independent, co-educational/single-sex, year levels offered), socio-economic status of the families (ICSEA, the Index of Community Socio-Educational Advantage), and specialities of the school (e.g., music, science).

Past academic performance of students as measured by: VCE success, percentage moving to higher education, or employment.

The quantity of technology available in the school (e.g., laboratory setting or laptop program, and the physical set-up of computer laboratories where appropriate).

Ratio of computers to students.

Technical support available in the school.

Perceived status of computing within the school, blocking of IT subjects, current school offerings of IT subjects, and cross-curricular IT activities at all levels.

Types of IT training and/or support programs offered to staff.

Each school’s history of girls selecting IT classes at senior level, and record of students going on to post-secondary IT courses.

Technology available for conducting the Digital Divas program.

Our impressions of the school context were confirmed in interviews with the Digital Divas teachers at each school.

From participating teachers

Surveys were completed by teachers prior to the commencement of the program covering their background in IT, their perception of computers in their school, their knowledge of the Digital Divas program, etc.

Surveys were completed by teachers again at the end of each semester regarding: any inhibitors or enablers for the delivery of the Digital Divas program; their perceptions of the girls’ engagement with the Digital Divas materials, the guest speakers, and value of the Expert Divas; what worked and what did not; and what improvements could be made.

Some of the teachers were interviewed at the end of each semester to gather more in-depth information on issues canvassed through the surveys.

From participating girls

Surveys were completed prior to and following the Digital Divas program to measure changes in attitude, confidence, motivation, perception of skills, engagement with IT, awareness of IT careers, and the identification of messages about IT that came through from the program.

Focus-group interviews, or individual interviews, were conducted with some participating girls at the end of the semester. Focus groups were also conducted twelve months later to collect reflections on the program and to look at the medium-term outcomes for the girls.
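
A minimal, purely hypothetical sketch of how such pre- and post-program survey data might be compared is given below; the file and column names are invented for illustration, and the project’s actual analyses are described in Chapter 3.

```python
# Hypothetical sketch: paired comparison of pre- and post-program attitude scores.
# File and column names are illustrative only, not the project's actual data.
import pandas as pd
from scipy import stats

pre = pd.read_csv("pre_survey.csv")    # one row per girl: student_id, attitude_score, ...
post = pd.read_csv("post_survey.csv")

# Match each girl's pre- and post-program responses.
matched = pre.merge(post, on="student_id", suffixes=("_pre", "_post"))

# Paired t-test on the change in attitude score.
change = matched["attitude_score_post"] - matched["attitude_score_pre"]
t_stat, p_value = stats.ttest_rel(matched["attitude_score_post"],
                                  matched["attitude_score_pre"])
print(f"Mean change in attitude: {change.mean():.2f} (p = {p_value:.3f})")
```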

From the Expert Divas

Ongoing reflections by the Expert Divas were collected at three points during the semester: week 3, week 12, and the end of the semester.

Focus groups were conducted with all Expert Divas at the end of the semester regarding their interactions with the girls, the relationships they established with the girls, and the types of support the girls asked for; for example, did they ask about the Expert Divas’ university studies or ideal career, possible careers in IT, or about being a geek?

Also of interest from the Expert Divas’ perspectives were: what inhibited or enabled the delivery of the program; perceptions of the girls’ engagement with the Digital Divas material; what worked and what did not; what improvements could be made; whether they felt that they added value to the girls’ experience of Digital Divas (if so, in what way/s); and the type of support (if any) they provided the classroom teacher.

Other considerations

We were interested to know whether the wider community (parents, teachers, and students in the year level but not in the Digital Divas program) was supportive of the Digital Divas program and of the students involved. A PhD student was recruited to look into this area.

We considered it critically important to disseminate the findings of the Digital Divas program, including the negative aspects as well as the positive effects. We planned for the lessons learnt from the Digital Divas program to be shared within relevant IT and education research contexts, and with professional organisations. We were also concerned to ensure that we could justify the interpretations, judgements, conclusions, and recommendations made in relation to the program. We wanted the results to inform new iterations of the Digital Divas program, and to be able to confidently use the results from the multiple implementations within different schools to inform and guide IT educational policy and teacher professional development.

Conclusion

In this chapter we have described the research background informing the evaluation framework adopted for the Digital Divas program, and the theoretical underpinnings that guided the development of the instruments used to undertake the evaluation. A mixed-methods research design was employed for this dimension of the project. From our knowledge and the literature it was apparent that few previous intervention programs with similar goals to ours had adopted such rigorous evaluation processes. It was our belief that identifying those aspects of the program which worked best and met our anticipated goals, as well as those which were less successful, would provide better information for those planning future interventions. More specific recommendations for policy, classroom practice, and teacher professional development may emerge when the positive and negative aspects of the program are assessed and interpreted with integrity.

More details of the research approach used for the project are discussed in the next chapter.

2 Expectancy-value theory evolved from the early work of psychologists, including McClelland and Atkinson, who researched achievement motivation.

3 Bernard Weiner’s attribution theory involves three causal dimensions: locus of control (internal/external); stability (stable/unstable); control (controllable/uncontrollable).
