
Phase II: Evaluation

3.5. Sampling Design


with the flows of the prototype (https://sites.google.com/view/ccamodel/home). The prototype is an interface prototype, not a functional prototype, and is discussed in Chapter 6 of the study.

Evaluation

This step involved an expert review technique in which various stakeholders evaluated the model and artefact (interface prototype) developed to demonstrate a possible solution to the identified problem. The evaluation was undertaken in two iterations. Iteration I involved both end-users and experts. The experts are qualified professionals in the organisation, such as Information Security Managers, Information Security Administrators, IT Risk Analysis Officers, IT Response Team Members, and Information Security Auditors. Iteration II involved experts only, as this iteration focused on improving the conceptual model. The feedback provided by experts and end-users in Iteration I prompted the improvement of the conceptual model in Iteration II.

Communication

The study findings will be published in the thesis and scholarly articles. The findings of the study will be forwarded to the participating organisations.


sampling methodology does introduce bias and decreases generalisability; however, the study included open-ended questions to assist in obtaining a nuanced picture of the subject domain.

The selection of organisations was from both government and private entities. The Information and Network Security Agency (INSA) is the sole security agency affiliated with the Ethiopian government and was also included in the study.

3.5.1. Sampling Design – Phase I

For the exploratory study (Phase I), a purposive sampling procedure was employed to select participants from the targeted organisations. Six (6) organisations from Ethiopia were sampled. Large organisations were considered, as they are more likely to have encountered information security incidents. From the identified organisations, 32 participants were included in the study. A pilot test was conducted with a group of information security experts (n = 6, one from each organisation) to assess and validate the content validity of the interview guide. Only the most salient questions were piloted. Table 3-1 summarises the sampling design applied in Phase I.

Table 3-1: Sampling Design for the Exploratory Study (Phase I)

Participant                                    No.   Percentage (%)
Information Security Expert                     7         22
Information Security Manager                    6         19
Information Security Risk Analysis Officer      3          9
Information Security IT Auditing Officer        4         12
Operational Manager                             5         16
End-User                                        7         22
Total                                          32        100
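As a quick illustrative check (not part of the thesis itself), the Percentage column of Table 3-1 follows directly from each group's count divided by the total of 32 participants:

```python
# Illustrative sketch: recompute the Percentage column of Table 3-1.
# Counts are taken from the table above; nothing here is new data.
counts = {
    "Information Security Expert": 7,
    "Information Security Manager": 6,
    "Information Security Risk Analysis Officer": 3,
    "Information Security IT Auditing Officer": 4,
    "Operational Manager": 5,
    "End-User": 7,
}
total = sum(counts.values())  # 32 participants in total
percentages = {role: round(100 * n / total) for role, n in counts.items()}
```

The rounded values reproduce the table (e.g. 7/32 → 22%), and the rounded percentages sum to 100.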

3.5.2. Sampling Design – Phase II

For the evaluation of the model and prototype in Phase II, a purposive sampling strategy was applied to select organisations (n = 5) within Ethiopia for the evaluation survey. The set of participants involved in Phase II differed from the set of participants involved in Phase I. The study considered additional organisations within the information security domain in order to obtain a broader perspective. As the need for a new strategy for incident management carries more weight in this developing context, the sample frame included five organisations (2 government, 2 private, and 1 security agency (INSA)). The selected organisations in the government, private and security sectors tend to have large investments in data centres and information security, which makes them more vulnerable to security incidents. Moreover, these are organisations that may be in the process of introducing incident management standards.

The evaluation of the model and prototype involved both information security experts and end-users. The aim of the evaluation was to obtain critical feedback on the acceptability of the proposal, in order to assure the fitness for purpose of the model concept and interface prototype. The evaluation process was planned in two iterations: Iteration I and Iteration II. For Iteration I, the planned target populations from the identified organisations were information security experts (n = 10) (i.e. information security auditors, information security managers, information security administrators, information security incident handlers, etc.) and end-users (n = 30). Nielsen (2010) suggested that five participants are sufficient to discover 85% of system usability problems. An optimal sample size of '10 ± 2' can be applied to a basic or general evaluation situation (Hwang & Salvendy, 2010). The limit of 10 experts was deemed sufficient, as this would allow a more in-depth enquiry over the two iterations. Eisenhardt (1989) argued that for qualitative studies, theoretical saturation is reached when additional cases add minimal value, and specified that 4–10 cases may be sufficient, as more cases may lead to additional complexity and copious data.

The sampling of participants per organisation for Iteration I is summarised in Table 3-2.

Table 3-2: Evaluation Sampling Plan Guideline for Iteration I (Phase II)

Organisation      Sector                    Sample (Security Expert)   Sample (End-User)   Total
Organisation A2   Government                           2                      6              8
Organisation B2   Private                              2                      6              8
Organisation C2   Corporate by Government              2                      6              8
Organisation D2   Private                              2                      6              8
Organisation E2   Security Agency                      2                      6              8
Total                                                 10                     30             40


This study involved an online survey, and respondents can be accessed through either probability or non-probability methods (Couper, 2000). In Phase II, a non-probability sampling technique was used to recruit respondents from the organisations. Depending on the number of experts or end-users willing to participate, respondents could then be selected randomly from that group.
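The selection step described above can be sketched as follows. This is a hypothetical illustration only: the names, counts, and seed are invented for the example, not data from the study.

```python
import random

# Hypothetical sketch: from the respondents willing to participate in one
# organisation, draw the required number at random (non-probability frame,
# random selection within it). Names and counts are illustrative only.
willing_end_users = [f"end_user_{i}" for i in range(12)]  # 12 volunteers
required = 6  # end-users needed per organisation, per Table 3-2

rng = random.Random(0)  # fixed seed only so the sketch is reproducible
selected = rng.sample(willing_end_users, k=min(required, len(willing_end_users)))
```

Using `min(required, len(...))` guards the case where fewer volunteers are available than the sampling plan calls for, in which case everyone willing is included.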

Respondents in roles such as IT Security Manager, IT Security Administrator, IT Security Consultant, IT Security Incident Response Team Member, IT Security Incident Manager, IT Security Auditor, IT Risk Analysis Officer, etc. were considered experts.

Iteration II involved only information security experts (n = 10), who provided feedback on the improvements made to the model concept following Iteration I. These experts were selected from the same pool of experts involved in Iteration I. The aim of Iteration II was to obtain further feedback on the improved conceptual model.