
Chapter 7 Discussions and conclusion


practical knowledge, shared assumptions and values; and expected outcomes). Doing evaluation (i.e., assessment) and the effect (of participation and use of results) were represented as Assessment and Use bundles respectively. Context, resources, activities, assumptions, and outcomes were examined.

Consistent with mixed methods research, the SEMM research strategy (section 4.8) meant that the results of the quantitative and qualitative phases were reported separately (see Chapters 5 and 6, respectively). These results are integrated at this stage to paint a complete picture and draw conclusions.

7.1.1 Framework for integration

Joint display devices (e.g., tables and figures) are effective for integration and further analysis, e.g., for bringing together data from the various phases to compare, contrast, and identify patterns which may otherwise not be evident (Fetters & Freshwater, 2015). Berman (2017) and Venkatesh et al. (2016) provide illustrations.
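To make the idea of a joint display concrete, the sketch below shows one way such a display could be assembled programmatically. It is an illustration only, assuming the pandas library is available; the factor labels, column names, and findings are placeholders that echo Table 7.1 rather than the study's actual data files.

# Illustrative sketch only: assembling a simple joint display with pandas.
# The factor labels and findings below are hypothetical placeholders, not the
# study's data; they show the merge-by-theme idea behind a joint display.
import pandas as pd

quantitative = pd.DataFrame({
    "factor": ["Organizational capacity", "Use of results"],
    "quantitative_finding": [
        "Neither agreed nor disagreed on resourcing items",
        "Neither agreed nor disagreed on follow-up items",
    ],
})

qualitative = pd.DataFrame({
    "factor": ["Organizational capacity", "Use of results"],
    "qualitative_finding": [
        "Inadequate internal expertise, budget and management support",
        "Reports distributed; recommendations not implemented",
    ],
})

# Align the two phases on the shared factor so convergence or divergence
# between the quantitative and qualitative findings is visible side by side.
joint_display = quantitative.merge(qualitative, on="factor", how="outer")
print(joint_display.to_string(index=False))

Placing the two phases side by side on a shared theme is what allows convergent, complementary, or contradictory results to be identified at a glance.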

The purpose of this research was descriptive. The following research questions (specified in section 1.1) informed the integration and analysis:

RQ1: What are the factors which describe how evaluation of e-government outcomes is done?

RQ2: What are the patterns of performance among the different groups? Put differently, how do different groups perform e-government outcomes evaluation?

RQ3: How can e-government evaluation be improved?

Phase 1 (quantitative research) described participants’ characteristics and responses, the main factors which characterized how e-government outcomes evaluation is done (i.e., the elements of practice as object), and the different ways in which these factors combine during performance (i.e., patterns of practice as performance). Phase 2 (qualitative research) provided insight into the elements of practice and the performance (i.e., what is done and how).

Thus, consistent with practice research, the objective of this analysis was to “identify the components of practice and their configuration” (Welch, 2017, p. 7), i.e., elements and patterns of performance. Table 7.1 is the framework of analysis and summarizes the results of the respective phases.

Table 7.1: Integration framework for analysis: combines results of the quantitative and qualitative phases (Tables 5.29 and 6.14)

Preliminary quantitative results (N=106)

Descriptive:
- Purpose of evaluation: compliance and decision-making, not learning about e-government or the evaluation process
- Evaluation initiated from within
- Evaluation teams consist of both internal and external stakeholders
- Main strategies: survey
- Main outcomes measured: organizational performance and accountability
- Generally negative perceptions of EOEP

Exploratory factor analysis and comparisons:
- Differences across all factors, according to position, background, experience, and knowledge, except Affective issues
- Positive perception of practice among Management, Evaluation, Over five years’ experience, and Above average knowledge
- In general, no difference between IT and Evaluation against Other; and also Above average against Average and Minimum experience

Cluster analysis:
- Cluster 3: IT and Evaluation managers with extensive experience (years and knowledge); Cluster 2: IT and Evaluation non-managers with moderate experience; Cluster 1: Other non-managers with little experience
- Implication: the clusters represent differences in participants’ perceptions of how e-government outcomes evaluation is done, according to their characteristics
- Positive perception of practice (highest mean) in Cluster 3
- The cluster solution seems to support the results of the comparisons of groups across factors

Factor-level results: quantitative (factor analysis and group comparisons, N=106) alongside qualitative (content analysis, N=12)

Organizational capacity
- Quantitative: neither agreed nor disagreed (on “Outcomes are regularly evaluated”; “Evaluation budget is adequate”; “Management supports evaluation”; “Evaluation tools (e.g., software) are adequate”)
- Qualitative: inadequate internal organizational capacity to conduct and use evaluation; lack of internal expertise, dedicated budget, and management support throughout the evaluation process
- Differences in perceptions among groups by position and background

Evaluation framework
- Quantitative: neither agreed nor disagreed (on “The indicators measure both financial and non-financial outcomes”, “Evaluation models (utilization, empowerment, etc.)” and “Batho Pele principles guide how evaluation is done”)
- Qualitative: policies (i.e., Batho Pele and NEPF) are important; organizational performance as a predominant value
- No significant differences among groups

Shared understanding
- Quantitative: neither agreed nor disagreed (on “Stakeholders agree on what to evaluate” and “Stakeholders agree on how to use results”)
- Qualitative: roles and responsibilities: background (and the expertise it brings to the role) and preparation are important factors
- Differences among groups (IT and Evaluation are likely to understand)

Stakeholder participation
- Quantitative: neither agreed nor disagreed (on “Stakeholders participate in decision making during evaluation” and “Selection of stakeholders is transparent”)
- Qualitative: participation: background (and the expertise it brings) influences the extent of participation; learning and role as motivation for future participation
- Differences by background (Evaluation more likely to participate throughout both decision making and activities); otherwise no differences among groups

Use of results
- Quantitative: neither agreed nor disagreed (on “Management produces strategy on how to implement recommendations”; “There is follow up to implement recommendations”; “How evaluation is done is reviewed regularly”)
- Qualitative: some degree of use (management response; distribution of reports); recommendations may not be implemented
- Differences by position and management (on management response and implementation)

Affective issues
- Quantitative: neither agreed nor disagreed (on “Participation increases stakeholders’ willingness to participate in future evaluations” and “Participation increases stakeholders’ commitment to implement recommendations”)
- Qualitative: effect: learning (i.e., conceptual) on both participants and their organization (improved awareness, understanding, and experience); learning is the motivation for future participation
- No differences among groups

7.1.2 The elements of e-government outcomes evaluation

The descriptive analysis showed the main purpose of evaluation was compliance and decision-making, surveys were widely used to measure accountability and organizational performance outcomes, and evaluation was initiated by external entities (i.e., external demand for evaluation). The qualitative phase seemed to support this: organizational performance outcomes were dominant and recommendations were not implemented for improvement. The literature shows evaluation in the development context is for compliance (Chouinard & Cousins, 2015) and is dominated by objective measurements, e.g., surveys (section 2.4.6). Song and Letch (2014) found evidence of symbolic IT evaluation use (i.e., to comply with demand). In e-government, positivism is predominant (Hébert, 2013) and organizational performance takes precedence over democratic values (Sterrenberg, 2017).

Factor analysis identified inadequate organizational capacity; the qualitative results similarly suggested inadequate internal organizational capacity. Evaluation in developing countries is characterized by a lack of resources (section 2.4.6). The differences in perceptions (in both the quantitative and qualitative analyses) were expected, e.g., managers (cf. non-managers) were likely to know about resourcing, a strategic function. Also, those from Evaluation (cf. Other) were likely to know about internal expertise. Fierro and Christie (2017) found differences in evaluation capacity among stakeholder groups.

The descriptive analysis showed organizational performance was the main outcome measured. Factor analysis identified Batho Pele principles and the measurement of both financial and non-financial outcomes as important elements of the evaluation framework. The qualitative results showed policies (i.e., Batho Pele and NEPF) and organizational performance outcomes were important. Policies define roles and responsibilities, outcomes, acceptable actions, etc., which is consistent with the tenets of public value and public service values, the evaluation framework, and practice theory (sections 2.2.5, 2.4.2.6, and 3.2.4). The qualitative results showed no significant differences. This seems plausible: policies in the public sector are prescriptive and punitive and are likely to elicit shared practice.

Factor analysis showed understanding (of what to evaluate and how to use results) was important. The qualitative results suggest background and preparation enabled understanding of roles and responsibilities. Both quantitative and qualitative results established differences in perceptions. These are supported by the literature: understanding involves knowledge about what to do and how, and is facilitated by learning (Bittner & Leimeister, 2014). Section 3.2.4 showed roles and positions in organizations prescribe “a way of being” (Schatzki, 2017b, p. 36) and bear on knowledge, actions, etc. Evaluation frameworks prescribe functions to assure appropriate actions and outcomes (Arbour, 2020). Understanding, professional background, and preparation (e.g., training before evaluation commenced) are linked. For example, professional background may determine positions, roles, and responsibilities; preparation can promote understanding and expertise and, thereby, professional development.

Factor analysis identified participation in decision-making and stakeholder selection as important. The qualitative results showed participants were selected by professional background; the expertise this (i.e., background) brings to a role determined the extent of participation in decision-making and activities. The differences were expected: participants from an evaluation background were likely to possess the necessary expertise and participate throughout. Chouinard and Cousins (2015) observed that expertise determined which participants were selected and the extent of their participation. According to Fielding (2013), stakeholders other than evaluators hardly participate beyond specifying the scope of an evaluation, due to a lack of expertise. Learning and participants’ role seemed to influence future participation.

On use of results, factor analysis showed management response and follow-up to implement recommendations were important. The qualitative results suggested reports were considered by management and distributed to stakeholders, but recommendations were not implemented. This supported the symbolic use discussed above. Consideration of reports is a form of learning (Alkin & King, 2017). However, the non-implementation of recommendations may suggest learning is not for improvement. This may be supported by the descriptive analysis: the purpose of evaluation was neither to learn to improve e-government nor to improve the evaluation process itself.

Inadequate internal organizational capacity may explain why recommendations may not be implemented (Gagnon et al., 2018). Decisions, e.g., to distribute reports, implement recommendations, etc., are a management responsibility (Arbour, 2020). This may explain the differences between managers and non-managers.

The link between purpose and use was discussed above (e.g., section 2.5.8). The qualitative results showed the effect of evaluation on participants and their organizations was learning to improve knowledge (i.e., conceptual use); this also suggests learning was not for improvement. Mayne (2014) found this (i.e., conceptual use rather than improvement) was generally the case.

The quantitative results showed there were no differences on Affective issues (i.e., participation increases disposition towards future evaluation and commitment to implement recommendations). The qualitative results showed the motivation for future participation was learning.

7.1.3 The patterns of performance

The quantitative comparisons showed differences in perceptions of what EOEP is. The distinct clusters also suggested differences in patterns of performance among the groups. The qualitative results seemed to confirm these differences. Therefore, it may be concluded that there were differences among the participants on the elements of e-government outcomes evaluation and the patterns of performance (i.e., EOEP as both an entity and performance). The characteristics of participants, i.e., professional background, position, and experience (i.e., years of experience and knowledge), were important factors. Overall, participants had negative perceptions of their practice; however, IT and Evaluation managers with extensive experience were likely to have positive perceptions.
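For illustration only, the sketch below shows the kind of cross-tabulation that could be used to examine whether cluster membership varies with participant characteristics such as professional background, in the spirit of the comparisons summarized above. The data, column names, and cluster labels are hypothetical stand-ins, not the study's survey records, and the analysis assumes pandas and scipy are available.

# Illustrative sketch only: cross-tabulating (hypothetical) cluster membership
# against participant background to examine differences among groups.
import pandas as pd
from scipy.stats import chi2_contingency

participants = pd.DataFrame({
    "cluster": ["Cluster 1", "Cluster 3", "Cluster 2", "Cluster 3", "Cluster 1", "Cluster 2"],
    "background": ["Other", "Evaluation", "IT", "IT", "Other", "Evaluation"],
})

# Contingency table of cluster membership by professional background.
table = pd.crosstab(participants["cluster"], participants["background"])
chi2, p_value, dof, _ = chi2_contingency(table)
print(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p_value:.3f}")

A low p-value in such a test would indicate that cluster membership is associated with the characteristic examined, which is the pattern the cluster solution in this research appeared to show.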

Differences in perceptions are an important topic in evaluation (Brandon & Fukunaga, 2014). Discordance among groups may result from differences in everyday experiences (Fierro & Christie, 2017), e.g., in resourcing, involvement, and expectations. Agreement may signify a shared view, evidence of an established evaluation culture (Preskill & Boyle, 2008b). Thus, this research leveraged the differences observed among groups (e.g., IT and Evaluation managers with expertise differed from the other groups) to develop a strategy for improvement.

7.1.4 Summary of findings

Sections 7.1.2 and 7.1.3 shed light on how e-government outcomes evaluation is done, i.e., the main elements and patterns of performance. Table 7.2 draws conclusions and juxtaposes them against findings from the literature review (Chapters 2 and 3).

As noted above, mixed methods research enables further conclusions (i.e., meta-inferences) that would not otherwise be possible from either the quantitative or the qualitative research alone. The following overarching propositions are drawn from the conclusions:

1. There is inadequate internal organizational capacity to do e-government outcomes evaluation and use the results.

2. There is a lack of a shared view of how e-government outcomes evaluation is done.

3. E-government outcomes evaluation is for compliance; learning is not for continuous improvement.

4. Organizational performance outcomes are evaluated while democratic and development outcomes (e.g., public participation, anti-corruption, etc.) are ignored.