
CHAPTER 3

RESEARCH DESIGN

Evaluating the Program

Overview

This chapter describes how the research was conducted and the data analysed. The approach we have taken underpins the results presented in later chapters. Our data collection included both qualitative and quantitative data employing a mixed-methods approach. We provide details on the data collection techniques employed and analysis of the data, as well as the ethical considerations when conducting research involving children. We also discuss our rationale for selecting these techniques.

Introduction

The first stage of our research was to build an evaluation framework. This framework was described in detail in Chapter 2. In this chapter we describe how the data were gathered and analysed based on the components described in our evaluation framework and our assumptions.

Our research was situated within both the information systems [IS] and education domains. Our methodological stance was interpretivist; that is, we focused on people and the ‘social world’ rather than the natural world (Cecez-Kecmanovic, 2011). We did not set out to test hypotheses or expect our findings to be replicable, as is common with positivist research (Cecez-Kecmanovic, 2011; Shanks, 2002). As highlighted by Shanks (2002) in his description of positivist research, we were not independent of the phenomena being investigated; we were closely involved with the schools, the teachers, the Expert Divas, and the girls themselves. It can be argued that much of the research conducted in both IS and education has a focus on the ‘social world’. In a reflective paper, the then editors of the Information Systems Journal wrote: ‘Because we [Information Systems researchers] are concerned with people and organisations as well as technology, our research approach is rarely purely “scientific”. We cannot repeat our experiments unless they concern technology alone’ (Avison, Fitzgerald, & Powell, 2001, p. 13). This was particularly true in our case.

Interpretive research approaches are particularly valid when looking at rich phenomena that cannot be easily described or explained by existing concepts or theories (Walsham, 1995). Our philosophy in designing the data-gathering was that no single approach would both provide the depth of understanding we sought about the Digital Divas program and allow us to evaluate the intervention effectively. In the words of Miles and Huberman (1994), ‘we have to face the fact that numbers and words are both needed if we are to understand the world’ (p. 40).

Mixed-methods Research

Our research used a mixed-methods approach; that is, one in which both quantitative (statistical) and qualitative data are gathered and analysed (Venkatesh, Brown, & Bala, 2013). Mixed methods can help in answering complex questions where both qualitative and quantitative methods are required (Hesse-Biber & Johnson, 2013). A question such as why girls are or are not interested in studying IT is multifaceted and complex. We needed an approach that would allow us to explore our questions in depth. Employing a mixed-methods approach can be challenging, given it involves integrating the findings from both qualitative and quantitative data into what Venkatesh et al. (2013) call ‘meta inferences’. What is important and challenging is ensuring the different approaches complement each other, validating the data-gathering methods used (Greene, Caracelli, & Graham, 1989; Miles & Huberman, 1994; Salomon, 1991; Venkatesh et al., 2013).

Approaches Used to Investigate Interventions

The techniques and tools we used to gather data are consistent with the approach taken to evaluate other intervention programs. The majority of programs we found used pre- and post- surveys, with some employing techniques such as interviews, focus groups, and observations.

The first reference we can find to an intervention program aimed at encouraging girls to consider computing was a paper describing a workshop for girls conducted at the University of Glasgow in 1987 (Watt, 1988). The workshops were described as ‘a great success’ and ‘a very worthwhile experience’ (Watt 1988, p. 114); however, the criteria used to measure ‘success’ were not reported.

Many short-term intervention programs to encourage girls to consider IT have used survey instruments to assess the outcomes (see, for example, Buxton, 1992; Christie & Healy, 2004; Clark, Pickering, & Atkins, 1991; Clayton, Beekhuyzen, & Nielsen, 2012; Craig, Lang, & Fisher, 2008; Graham & Latulipe, 2003). Different longer-term interventions in situ, such as Computer Clubs for Girls [CC4G], and the use of mentors and support networks for women studying IT, have used a range of evaluation methods; rarely, however, has more than one method per study been reported. Heo and Myrick (2009), who evaluated an out-of-school club, used pre- and post-interviews and surveys, as well as observations. Pre- and post-surveys were also administered by Doerschuk, Liu, and Mann (2007) to assess the effectiveness of a summer camp for girls. Longer-term interventions have generally involved a greater variety of data-collection techniques; for example, Clayton and Lynch (2002) reported on a number of strategies implemented in Queensland, Australia, to increase the number of women studying IT, and also assessed the effectiveness of the programs by reviewing enrolment data in IT courses. Similarly, structured interviews, focus groups, and surveys were used by Staehr, Martin, and Byrne (2000/2001) to assess the effectiveness of a support program for female tertiary students.

Selection of Schools and Participants

The research was funded by the Australian Research Council (ARC) and industry partners [an ARC Linkage grant]. Such studies require the researchers to work with their industry partners in undertaking the research. The Victorian government department responsible for education, the Department of Education and Early Childhood Development [DEECD], and one of the participating schools, were two of the five industry partners and provided funding for the study as outlined in Chapter 1.

Selecting Schools

The support from the DEECD included distributing information about Digital Divas to schools within Victoria and inviting them to participate. This occurred at the start of 2010 and 2011. Schools contacted the researchers and were provided with further information. The information included details of how the program would be run, the support that would be provided, and our expectations for participation. The school principal had to agree to run Digital Divas as an all-girls class for at least one school term (approximately 10 weeks). Those schools formally agreeing to be involved sent one or two teachers, who would teach the program, to a training session. At the training session it was again reinforced that the program had to be run in an all-girl class. In a couple of cases it became evident that this was not possible and the schools withdrew from the study. All other schools agreed to participate and accepted the conditions we set.

In all, nine Victorian government schools agreed to participate; six were co-educational and three were single-sex schools. In the final year of the study, a single-sex independent school in Sydney, New South Wales, also ran the Digital Divas program.

Selecting Teachers

The selection of who would teach the Digital Divas program was made by each school. We did not specify the background or gender of the person who would teach the class; we were happy for the school to make the decision. Given the complexity of running a school and organising classes, it was clearly more appropriate to leave this decision in the hands of the schools. A female teacher was allocated the class in all but one school.

Selecting Girls

We did not suggest how schools should organise the Digital Divas classes. All schools decided to offer Digital Divas as an elective. In some cases the classes were run with girls from more than one year level in the same class. Most of the participating girls had self-selected to take Digital Divas. In some cases, however, the teachers told us that there were girls in their classes who had not been able to select any other elective and had ended up in the Digital Divas class or had been ‘forced’, for other reasons, to take the class; this, however, was rare.

Selecting Expert Divas

One of our key underpinning beliefs was that girls needed to be exposed to, and hear from, women studying IT at tertiary level (Chapter 5 describes the philosophy of the program in more detail). The Digital Divas program therefore involved a role model (Expert Diva) who attended one class each week. The female role models were drawn from currently enrolled tertiary IT students. We called for expressions of interest from female students studying IT and asked the women to complete a short survey as to why they might want to participate as Expert Divas. The potential Expert Divas were interviewed and those selected were matched with a school based on the school’s location and the timing of classes. The women selected as Expert Divas were paid for their time.

Techniques and Tools for Data-gathering

As mentioned in Chapter 2, the design of our evaluation framework, data-gathering, and analysis was guided by the seven criteria identified by Cohoon and Aspray (2006, p. 144) as necessary for a credible evaluation. These criteria are:

Designing a convincing way of measuring outcomes

Ensuring a large data sample (more than 10)

Using measures that are appropriate

Including pre- and post- surveys, a control group or some form of comparison

Having more than one data-collection point

Using suitable analysis techniques

Ensuring research that has integrity

In this section we outline the different tools and techniques we used for gathering data and analysing the results. We have organised this around the different groups involved. There is little reported research on the views of others who were not the direct targets of the interventions (this is discussed in more detail in Chapter 7). Our primary data-gathering therefore included the girls, the teachers, the Expert Divas, and our own observations.

Copies of all the data-gathering instruments are found in Appendices B, C & D.

The Girls

Conducting any research on children requires consideration of a number of factors, including the suitability of the data-gathering techniques to be used, ethical considerations, and how to ensure the validity of the gathered data (Parr, 2010). We took a ‘child-centred’ perspective (Christensen & Prout, 2002), recognising the girls as people with their own views and biases. We also adopted Christensen and Prout’s (2002) ‘ethical symmetry’; that is, the researcher uses the same approaches whether the participants are adults or children. There are no particular research methods that must be used: ‘[R]ather it means that the practices employed in the research have to be in line with children’s experiences, interests, values and everyday routines’ (Christensen & Prout, 2002, p. 482). Our university ethics requirements were that the girls, as well as their parents (or guardians), were required to give consent to participate and could withdraw at any time.

Data on the impact of the intervention on the girls were gathered using pre- and post-surveys, focus groups, and observations.

Pre- and Post-surveys

Consistent with similar studies, we used pre- and post-surveys to identify and explore girls’ views and perceptions of IT, IT careers, and stereotypes before and after they had experienced the Digital Divas program. We collected 265 pre-surveys and 199 post-surveys from the girls; not all girls were available to complete the post-survey. The purpose of the surveys was to understand the extent to which the Digital Divas intervention had changed the girls’ perceptions of IT. Questions were drawn from a number of previously validated survey instruments (see, for example, Murphy, Coover, & Owen, 1989; Fogarty, Cretchley, Harman, Ellerton, & Konki, 2001) and were consistent with pertinent elements of Eccles’ model of academic choice.

The pre-survey included the following:

Items requiring Yes/No responses covering questions relating to computers at home, and the girls’ use of computers and applications at home and at school.

Items with five-point Likert-type response formats
(1 = weak, 2 = below average, 3 = average, 4 = good, 5 = excellent) exploring girls’ computer self-efficacy.

Items with five-point Likert-type response formats
(1 = Strongly Disagree, 2 = Disagree, 3 = Not Sure, 4 = Agree, 5 = Strongly Agree) exploring a range of attitudes towards computing including: beliefs about teachers and parents, careers, boys’ and girls’ skills with computers, and gender-stereotyping of IT.

Open-ended items seeking opinions on careers in IT.

The request to draw a picture of someone in the computer industry using a computer.

The post-surveys included open-ended questions exploring the girls’ reactions to the Digital Divas intervention, in particular changes in their perceptions. Many of the items presented in the pre-survey were repeated so that we could conduct statistical tests to determine changes in perceptions and attitudes. The girls were asked to provide their names on the surveys so that it would be possible to compare pre- and post- responses; however, this did not always happen.

Both the pre- and post-surveys were completed in hard copy. In the first year of the study we asked teachers to have the girls complete the surveys and then send them to us. This proved problematic, with delays in returning surveys. We found it more effective to have a researcher oversee the administration and collection of the surveys. This approach gave us more control over the process, ensuring that the surveys were completed by all the girls participating in the research at the same time. Even though most pre- and post-surveys were completed in class time, for a range of reasons not all girls completed both surveys; in most cases this was due to absences on the day they were administered.

In line with good survey item design practice (Neuman, 2003, pp. 68–287), the items were carefully designed to ensure that the girls would understand their meaning. The sequence in which the questions/statements were presented was important, and the survey was divided into sections. Demographic data were collected at the start. This was followed by items probing self-perceptions of computer competence and the girls’ use of computers at school and at home. A sequence of items related to attitudes towards, and the stereotyping of, computing, its study, and involvement in computer-related careers followed. The students were then asked to draw a person working in the computing industry and describe what the person was doing. Finally, there were open-ended items seeking the girls’ views on what they would, and would not, like about working in the computer industry.

The surveys were designed to be scanned electronically to maximise accuracy of data entry. Copies of the pre- and post- student surveys are found in Appendix B.

Focus Groups

Focus groups are increasingly seen as a valuable tool in social science research, particularly when combined with other research techniques. Neuman (2003) describes a focus group as ‘a special qualitative research technique in which people are informally “interviewed” in a group discussion setting’ (p. 396). They are particularly useful when exploring people’s ideas and perceptions. The value of focus groups is that participants are exposed to the ideas of others in the group, often triggering new recollections and bringing forth new perspectives (Stahl, Tremblay, & LeRouge, 2011), and they can feel more empowered to speak than, for example, in individual interview settings (Neuman, 2003, p. 396). In our case, focus groups were a more effective mechanism for gathering qualitative data from girls than interviews. In small groups the girls were more likely to express their opinions than they might otherwise have been individually (Souza, Downey, & Byrne, 2012).

How a focus group is designed and managed, including limiting the number of participants, is important (Williamson, 2002c). With too many participants some voices will not be heard; too few can result in insufficient discussion. Typically a focus group will consist of between six and twelve participants and last for between 60 and 90 minutes (Neuman, 2003, p. 396). A focus group has to be managed carefully to ensure that one or two members of the group do not dominate the discussion and that everyone has an opportunity to contribute their views (Neuman, 2003, p. 396; Williamson, 2002c).

At each participating school we ran two or three pre- and post-focus groups with three to six girls in each group. A total of 31 pre-focus groups involving 134 girls and 30 post-focus groups involving 108 girls were conducted. The girls had to agree to participate during school time; therefore at least two focus groups were run to accommodate when the girls could participate. The focus group discussions were recorded and transcribed later.

Factors included in the Eccles’ model of academic choice shaped many of the issues explored in the focus groups. In the focus groups that were held prior to the girls undertaking the Digital Divas program we explored their current use of computers and applications at home and at school. The girls were encouraged to discuss their views on the use of computers by boys and girls and who was more likely to have a career in IT, thus triangulating the survey data and providing deeper insights.

The purpose of the focus groups after the girls had completed their Digital Divas class was to assess any changes in attitude, particularly changes in stereotypical views on women and IT, and if they were now more likely to consider a career in IT, again providing a deeper understanding.

To understand the extent to which the program had resulted in long-term change, additional focus groups were held in four schools with 33 girls who had completed Digital Divas more than 12 months earlier. At one school, a set of focus groups was held with girls two years after they had participated in Digital Divas. Our goal had been to do this with girls from all schools. Unfortunately, despite our best efforts it proved difficult to track the girls; in some cases the teachers had moved on, in other cases the teachers did not recall which girls had been involved.

A copy of the focus group open-ended questions can be found in Appendix D.

Observations

All classes were visited weekly by an Expert Diva (a female tertiary IT student). The Expert Divas provided weekly reflections on what was happening in the class, what the girls were working on, what they did in the class, and how engaged the Expert Diva thought the girls were with the work. Classes were also visited at least twice by a member of the research team, and notes were written up after each visit. The observations provided us with another perspective on how the intervention was running, the quality and effectiveness of the modules of work, and the girls’ reactions to the program.

Teachers

As with the girls, we gathered data from the teachers using pre- and post-surveys (18 pre-surveys and 10 post-surveys were completed), observations, and interviews at the beginning and the end of the program.

Pre- and Post-surveys

Surveys were completed by each teacher prior to teaching Digital Divas and again shortly after the program finished. The surveys were completed in hard copy and scanned for data entry. The teacher pre-survey was in sections and included the following:

About you: personal and demographic data were sought, including sex, qualifications and teaching experience, self-efficacy with computers (Likert-type response format: 1 = low to 5 = high), and the year level/s involved in the Digital Divas intervention at the school.

Computers in your school: information was sought on the organisation of computers and IT offerings in the school.

Your views on girls, boys, and ICT: the teachers’ perceptions about the capabilities and attitudes of girls and boys with respect to IT. The teachers were also asked to explain their responses.

The Digital Divas program: The teachers were asked why they had agreed to teach in the Digital Divas program and what they thought success would look like.

The post-survey for teachers consisted primarily of free-text responses to questions relating to the Digital Divas intervention, including: how students had been selected into the class; which teaching modules they had used with their students; their perceptions of the more and less successful aspects of the intervention; and whether they believed it had been of benefit to the girls. In addition, we asked them to comment on their Expert Diva and the role she played, and what they might change in a future delivery of the intervention.

Copies of the teacher pre- and post- surveys can be found in Appendix C.

Interviews

Given the influence of the classroom teacher on student attitudes, it was important to explore the teachers’ views more widely. Eleven pre-interviews with teachers were conducted (not all teachers were available for the pre-interview) and 13 post-interviews (some schools ran the program more than once with different teachers). The semi-structured pre-interview protocol focused broadly on how IT was viewed and taught within the school, issues around the implementation of the program, and how teachers thought the girls might react. In the post-interviews we explored the impact of the Digital Divas program and how the program could be improved in future. Each interview lasted 30–40 minutes and was recorded.

Copies of the pre- and post-teacher interview open-ended questions can be found in Appendix C.

Observations and Informal Feedback

Members of the research team observed lessons at times convenient to the teacher. Informal feedback was simultaneously gathered from the teachers. Notes were kept for later use.

Expert Divas

Eight Expert Divas were appointed to the program; some were involved in more than one school. The Expert Divas were asked to complete a weekly reflection in which they recorded their observations. We asked the women to tell us what had and had not worked in the lesson they attended, what their contributions to the class were (i.e. what they did), and how engaged they thought the girls were with the materials. They were also invited to make any other comments they deemed relevant.

The Expert Divas were interviewed at the end of their time in the role. We conducted 13 interviews (those Expert Divas who participated more than once were interviewed more than once). The semi-structured interviews explored their thoughts on how well Digital Divas had worked in the particular classroom and school they were in, and what they thought had and had not worked. They were also asked to reflect on their experience, what they liked and did not like, and what they had learned. The interviews took approximately 30 minutes and were recorded.

Schools

It was important to collect data about each school. The data came from two sources: publicly available information from the MySchool website (http://www.myschool.edu.au/), and specific information provided by the principal.

The information provided included:

the ethnic make-up of the school

the socio-economic level of the school

the sex breakdown of students in the school, including the number of girls taking IT subjects or electives at the different year levels

the number of students enrolled in Year 12 in that year (final year of schooling)

the school’s academic performance in: National Assessment Program – Literacy and Numeracy [NAPLAN] at Years 7 and 9; and Victorian Certificate of Education [VCE] achievements in the final year of schooling.

computer facilities in the school, including the ratio of students to computers

how computing was taught in the school, at which year levels, and if computing was compulsory at any level and/or offered as an elective

Chapter 4 provides more details on each of the schools and the statistics relating to the factors described above.

Visits to each of the schools also provided an opportunity to observe and note key features of the school environment and the classrooms.

It should be noted that focus groups were not conducted in one school, that in two schools the class size was very small, and that not all Expert Divas completed their weekly reflections.

Data Analysis

For the analysis of the quantifiable data included in the student and teacher surveys, we used the Statistical Package for the Social Sciences [SPSS], a software package commonly used for statistical analyses by social science researchers.

NVivo was used to code and then to assist with the analysis of all qualitative data including the open-ended responses in the surveys. We used a grounded-theory methodology, specifically that described by Bryant and Charmaz (2007) and Urquhart (2013), to categorise the responses. Saillard’s (2011) interpretation of this is that ‘Categories are grounded in the data thanks to the line-by-line coding but still a category is part of the analytic thinking of the researcher. Raising a code to a category is an analytic process.’ Our research began with defined assumptions around which the survey, interviews and focus group questions were built. We therefore had some ideas of the likely categories that guided the initial coding decisions; we then allowed the data to ‘speak for themselves’.

Analysis of Quantitative Data Using SPSS

To meet the goals of the study and answer the research questions, descriptive and inferential statistics were used in the analyses of the quantitative data. As appropriate, mean scores, modal scores, and/or frequency counts were calculated to describe the data sets.

Depending on the response format type, parametric or non-parametric statistical tests were conducted to compare response patterns. Independent groups’ t-tests and analyses of variance (ANOVAs) were used when the response data were interval and we were interested to know if there was a statistically significant difference in mean scores. Chi-squared tests were used when data were categorical or ordinal to identify if response distributions differed. Some Pearson bivariate correlations were also conducted to explore the relationships between particular variables (items) of interest. As is common in the social sciences, a p-value of .05 was the level of statistical significance adopted. If responses were found to be statistically significantly different, the meaningfulness of the difference was also examined.
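The analyses described here were run in SPSS. Purely as an illustration of the kinds of tests involved, the sketch below shows how an independent groups’ t-test, a chi-squared test, and a Pearson correlation might be expressed in Python with SciPy; all variable names and data values are invented for the example and are not drawn from the study.

```python
# Illustrative sketch only: the study used SPSS, but the same kinds of tests
# can be expressed with SciPy. All data below are hypothetical.
import numpy as np
from scipy import stats

# Hypothetical 5-point Likert self-efficacy scores for two groups of respondents
group_a = np.array([4, 3, 5, 4, 2, 4, 3, 5, 4, 4])
group_b = np.array([3, 2, 4, 3, 3, 2, 4, 3, 2, 3])

# Independent groups' t-test: is the difference in mean scores significant?
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # compare p against .05

# Chi-squared test on a hypothetical 2x2 contingency table
# (e.g. Yes/No responses cross-tabulated by group)
contingency = np.array([[30, 10],
                        [18, 22]])
chi2, p_chi, dof, expected = stats.chi2_contingency(contingency)
print(f"chi2 = {chi2:.2f}, p = {p_chi:.3f}")

# Pearson bivariate correlation between two items of interest
r, p_r = stats.pearsonr(group_a, group_b)
print(f"r = {r:.2f}, p = {p_r:.3f}")
```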

Paired t-tests were used to compare the mean scores on identical items on the pre- and post-surveys to determine if there had been a change in students’ or teachers’ measured responses to these items (e.g. attitudes or beliefs) as a consequence of the Digital Divas intervention. It should be noted that paired t-tests can only be conducted on the responses of those participants who completed both the pre- and the post-surveys. Thus the number of responses analysed using paired t-tests was less than the number of participants completing the pre-survey, and also less than the number completing the post-survey, because some students completed only one of the two.
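Again as a hypothetical sketch rather than the SPSS procedure actually used, the following shows why the paired analysis involves fewer cases: pre- and post-responses are matched on the name provided, and only respondents appearing in both waves enter the paired t-test. The names and scores are invented.

```python
# Illustration only: a paired t-test can only use respondents present in both
# waves, so pre- and post-responses are first matched on an identifier.
import pandas as pd
from scipy import stats

pre = pd.DataFrame({"name": ["Ava", "Bec", "Cai", "Dee"],
                    "item_score_pre": [3, 2, 4, 3]})
post = pd.DataFrame({"name": ["Ava", "Bec", "Dee", "Eve"],
                     "item_score_post": [4, 3, 4, 5]})

# Inner join keeps only girls who completed both surveys ("Cai" and "Eve" drop out)
matched = pre.merge(post, on="name", how="inner")

t_stat, p_value = stats.ttest_rel(matched["item_score_pre"],
                                  matched["item_score_post"])
print(f"n matched = {len(matched)}, t = {t_stat:.2f}, p = {p_value:.3f}")
```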

Analysis of Qualitative data using NVivo

NVivo is a well-recognised tool for analysing qualitative data (Davidson, 2012; Gibbs, 2002, p. xxiii). It facilitates, in a systematic way, the coding of a range of data types that can later be analysed. In contrasting quantitative and qualitative data analysis, Gibbs (2002, pp. 10–11) argues that qualitative analysis involves researchers managing their data as they move between data sets and compare data. Data, apart from text, could consist of codes, memos, annotations, etc. Davidson (2012) describes five ‘interlocking stages’ when considering qualitative data and technology. These stages are ‘creating data; organising data; primary responses; secondary responses; curation’ (p. 4).

Organising the data involved identifying which interviews and which free-text questions from the survey data we would code. These were brought into NVivo for coding. The team had to agree on the high-level nodes and sub-nodes we would use (see Figure 3.1, next page). The decisions were based on the research questions and assumptions detailed in Chapter 2.

Figure 3.1 Screen shot of nodes and sub-nodes4

We agreed to create a different set of nodes for data from the girls as distinct from data from the teachers and the Expert Divas. We then looked at mechanisms for linking and analysing the data we had, and considered ways of visualising the outcomes (see Figure 3.2).

Figure 3.2 Nodes for coding teacher and Expert Diva data

Critical to the validity of the data analysis process is how the coding is managed and how the data are interrogated (Welsh, 2002). As described by Ho and Frampton (2010), the reliability of the coding can be assessed by examining the consistency with which team members code the same text. Three of the researchers coded all the data. After discussion, agreement was reached on the nodes to be used. As recommended, we also wrote a code book detailing the nodes, the meaning we gave to each, and examples of text coded to each node. Table 3.1 (next page) is a segment of the code book.

Three of the researchers independently coded the same three interviews. A coding comparison query was run to establish the inter-coder reliability (Gibbs, 2002; QSR, 2013, pp. 236–237). The results indicated that we had more than 90% agreement on the codes. During the coding process each researcher kept a diary in which reflections on the data, questions and suggested changes to nodes were made. Any recommendations for changes were discussed by the team.
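The inter-coder reliability figure above came from NVivo’s coding comparison query. As a loose illustration of the underlying idea (not the NVivo calculation itself), percentage agreement and a chance-corrected statistic such as Cohen’s kappa can be computed from two coders’ decisions about whether each text segment belongs at a given node; the decisions below are hypothetical.

```python
# Illustrative sketch, not the NVivo coding comparison query: a simple way to
# compute percentage agreement (and Cohen's kappa) between two coders'
# decisions about whether each text segment belongs at a node.
from sklearn.metrics import cohen_kappa_score

# 1 = coder assigned the segment to the node, 0 = did not (hypothetical data)
coder_1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
coder_2 = [1, 1, 0, 1, 0, 1, 0, 0, 1, 1]

agreement = sum(a == b for a, b in zip(coder_1, coder_2)) / len(coder_1)
kappa = cohen_kappa_score(coder_1, coder_2)
print(f"Percentage agreement: {agreement:.0%}")  # 90% for this hypothetical data
print(f"Cohen's kappa: {kappa:.2f}")             # chance-corrected agreement
```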

Code: Attitudes and experiences (aggregate): Positive / Negative / Neutral / Mixed
Definition: Mixed was where the phrase included both negative and positive comment.
Examples:
Positive: ‘Like it’s a good thing to learn stuff’
Negative: ‘Too complicated’
Neutral: ‘Depends on what you’re doing’
Mixed: ‘Think the teacher should teach us more about programs we would use’

Code: Awareness and interest: IT career awareness (pre-); IT career (post-)
Definition: In IT – general knowledge of IT and of women in IT. IT careers – select as a future career; knowledge of technology; level of knowledge of types of careers; possible career; wider understanding of what an IT career was.
Examples:
‘There are lots of women in IT’
‘realised IT is not just programming’
[will you do IT] ‘no, I have higher goals’
‘to like repair computers and stuff’
‘IT certainly did broaden our horizons a bit.’
‘Before Digital Divas, we would have thought, IT, no. I might go over and do something else. Now, after it, we’ve gone, “Hey, that’s actually not a bad idea.”’

Table 3.1 Example from the code book

To interrogate the data, queries were run within NVivo. These included:

Matrix coding queries enable a cross-tabulation of selected items. For example, to explore if the curriculum excited girls’ interest a matrix code query was run looking at the node ‘Attitudes and experiences’ (positive, negative, mixed and neutral responses) and the node ‘DD_Program’.

Word frequency queries allowed us to identify the most frequently used words including at the node level. For example, a word frequency was conducted on the node IT Self Efficacy and identified that the word ‘confidence’ was the most frequently used word.

Text search queries were undertaken to identify individual occurrences of words. For example, how often the word ‘women’ was used in the post-survey responses.

It is also possible to interrogate data at the individual node level. For example, to understand the impact speakers had on the girls from all the schools, we explored the text coded to the node ‘Influencing factors’ and the sub-node ‘Speakers’. This enabled us to identify a theme or a pattern by counting, for example, the number of positive or negative responses (Miles & Huberman, 1994, p. 253).
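For readers unfamiliar with these query types, the sketch below mimics a matrix coding query (a cross-tabulation), a word frequency query, and a text search over a handful of invented coded responses; the actual queries were run inside NVivo, and all codes, node names, and text here are hypothetical.

```python
# Illustration only: comparable operations to the NVivo queries described above,
# run over hypothetical coded responses.
import re
from collections import Counter
import pandas as pd

# Hypothetical segments, each tagged with an attitude code and a node
coded = pd.DataFrame({
    "attitude": ["positive", "positive", "negative", "neutral", "positive"],
    "node":     ["DD_Program", "DD_Program", "DD_Program", "Speakers", "Speakers"],
    "text": [
        "I have more confidence using computers now",
        "the modules were fun and I learned about women in IT",
        "too complicated",
        "depends on what the speaker talks about",
        "there are lots of women in IT careers",
    ],
})

# Matrix coding query: cross-tabulate attitude codes against nodes
print(pd.crosstab(coded["attitude"], coded["node"]))

# Word frequency query: most common words in the coded text
words = [w for t in coded["text"] for w in re.findall(r"[a-z']+", t.lower())]
print(Counter(words).most_common(5))

# Text search query: how often the word 'women' occurs
print("'women' occurrences:", words.count("women"))
```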

Research Design and the Evaluation Framework

As discussed and described in Chapter 2, we believed an evaluation framework was important for us to be able to measure the impact of the Digital Divas program. An important component of any evaluation framework is the assumptions on which the program is built. Table 3.2 briefly describes our assumptions and how the outcomes of these assumptions were evaluated.

Assumption: The curriculum modules we designed would excite the participants, engender greater self-efficacy, and deliver positive messages about IT.
Mechanism for evaluating outcomes: Pre- and post-surveys with girls and teachers, observations, focus groups, teacher interviews, and Expert Diva reflections and comments.

Assumption: By participating in Digital Divas the girls would acquire knowledge that challenges the prevailing stereotypes/myths within society that computing is boring, technical, involves working alone, and is the domain of males.
Mechanism for evaluating outcomes: Pre- and post-surveys with girls and teachers, observations, teacher interviews, and focus groups.

Assumption: A wide range of IT careers would be presented that involved analysis, development, programming, designing, and problem-solving aspects of IT, not just the use of computers as tools.
Mechanism for evaluating outcomes: Design of the modules and the value of the speakers in schools, based on feedback from interviews with teachers.

Assumption: The role models would present accessible choices of computing careers.
Mechanism for evaluating outcomes: Interviews with teachers and Expert Divas.

Assumption: The teacher delivering the program would make explicit the links between the activities the girls were undertaking and their real-world significance, and why that is important.
Mechanism for evaluating outcomes: Post-surveys with teachers, interviews with teachers, observations, and reflections of Expert Divas.

Assumption: Any increase in motivation/enthusiasm for IT would be maintained until the girls needed to make further subject selections.
Mechanism for evaluating outcomes: Focus groups 12 months and longer after the girls had completed the program.

Assumption: The wider community would be supportive of the program and those participating in it.
Mechanism for evaluating outcomes: Outcomes of the research conducted by the PhD student.

Assumption: Girls who undertake IT-based subjects in later years at high school will be more likely to consider a higher education course in IT.
Mechanism for evaluating outcomes: Not possible to establish because the project ended before such an evaluation could be made.

Table 3.2 Assumptions and evaluation

Chapter summary

It is not enough to implement a program or service – it is imperative we know if it has made a difference. (Slatter, 2003, p. iii)

Without conducting a well-considered and well-designed evaluation of Digital Divas, we would not have been in a position to draw meaningful conclusions. Our evaluation framework served as an important guide and reminder, ensuring that the outcomes of the assumptions we made could be evaluated. Using software tools such as SPSS and NVivo enabled us to organise our data effectively and analyse it carefully. In this chapter we have provided a detailed discussion of the research approach we took for data collection and analysis guided by the seven evaluation criteria described by Cohoon and Aspray (2006).

We used the qualitative data to further explain and enrich our understanding of the quantitative results. We are confident that the approach we took provided valid and reliable results from which conclusions could be drawn.

4 NVivo screen shots reproduced with permission from QSR International.
