Introduction
Problem statement
The previous section points to two underdeveloped areas of research that need attention: the evaluation of e-government in developing countries and how to evaluate e-government outcomes. RQ2 concerns how different groups carry out their practice and can be reformulated as: "How do different groups carry out the evaluation of e-government outcomes?"
Theoretical and philosophical perspective
The research problem, purpose, objectives, and questions were consistent with practice theory-informed research (Browne et al., 2014; Nicolini & Monteiro, 2017).
Research strategy
Research design. The overall rationale for mixed methods was explanatory (mentioned in the Summary, the Introduction and section 4.8). Overview of the quantitative phase (section 4.8):
- Purpose: to describe practice
- Sampling: purposive, snowball
- Data collection: questionnaire
- Data analysis: descriptive, exploratory
Research context and sample
Summary of results
Delimitation
Consistent with practice theory, the primary goal was to explore the elements of practice and the patterns of performance. The following elements of the EOEP were examined: context, resources, activities, theory (i.e. underlying assumptions), and outcomes.
Relevance and contribution
Outline of thesis
It can be concluded that participants' perceptions of how e-government outcomes evaluation is done (i.e. the patterns of performance) differed according to their characteristics. For example, P4 (Evaluation; Management; >5; Above average) referred to "the lack of evaluation skills internally".
Review of literature
Literature review strategy
Several typologies have been proposed; important factors to consider include the state of the problem area and the scope, breadth, and structure of the review.
E-government and public sector performance
- E-government
 - The public sector
 - Public sector reforms: an overview
 - Public value: voices in the field
 - Public service values
 - Public value and evaluation
 - Public value research: current issues
 - IT in the public sector
 
E-government in developing countries
- Development assistance and public sector reforms
 - E-government and development
 
Evaluation practice
- Practice
 - Evaluation
 - Evaluation as social practice
 - Evaluation quality: factors of influence
 - Research on evaluation practice
 - Evaluation practice in developing countries
 
Where appropriate, the umbrella definition 'the use of IT in the public sector' is used as shorthand for the preceding definitions. As a profession (i.e. specialty, discipline, etc.), evaluation denotes a branch of science, i.e. the science of valuation (Patton, 2018).
IT outcomes evaluation practice
- Types of IT evaluation
 - IT outcomes evaluation: challenges
 - Research on IT evaluation practice
 - Context
 - What is measured
 - Resources
 - Activities
 - Outcomes
 - Theory
 
Complementary factors can improve IT outcomes (section 2.5.6) and are therefore important to IT evaluation theory. The literature was reviewed to identify key features of IT outcomes and a scheme for conceptualizing IT evaluation.
E-government outcomes evaluation practice
- E-government outcomes evaluation: challenges
 - Context
 - What is measured
 - Resources
 - Activities
 - Outcomes
 - Theory
 - E-government evaluation practice in developing countries
 
From section 2.3, development (and therefore development evaluation) is central to the literature on e-government performance evaluation. This has spawned efforts to extend the public value literature to evaluating e-government outcomes in developing countries (Hiziroglu et al., 2017), as evidenced by the studies reviewed above.
South Africa as context
- Public sector reform
 - Evaluation practice
 - E-government and evaluation practice
 
This made it possible to examine both the elements and the stable patterns of performance (i.e. the "what" and the "how") in evaluating e-government outcomes. All bundles (e.g. the constellation of all sectors) can be connected to form the constellation of e-government outcomes evaluation in the public sector. In accordance with the research objective, a two-phase sequential explanatory mixed methods design (quantitative followed by qualitative) was adopted, the latter phase to explain the results of the quantitative phase.
Methods (e.g. research logic, sampling, and data collection and analysis) can be fully or partially integrated across components (Venkatesh et al., 2016). The main objective of this chapter was to understand the preliminary results of the quantitative phase, i.e. the elements of e-government outcomes evaluation practice and the patterns of performance.
Theoretical and conceptual frameworks
The role of theory in this research
The literature review, theoretical and conceptual frameworks, and other elements (e.g. research objectives and questions) facilitate an understanding of what needs to be known and how, and are therefore fundamental to research design.
Practice theory
- Why practice theory?
 - What is practice theory?
 - The methodological debate
 - Schatzki’s site ontology
 - Implications for this research
 
Practice theory – also called social practice theory – is an amalgam of theories that focus on practice as a central concept (Schatzki, 2017b; Spaargaren et al., 2016). Practice theory provides concepts and their relationships, a vocabulary and a grammar, to represent the phenomenon under investigation as practice (Hui et al., 2017; Nicolini). As Nicolini observed, it is important to witness the "scene of action" (p. 27) to study practice; "the practitioners' account is undesirable" (p. 29).
Thus, practice as situated means context or situation (i.e. the state of the world within which the action is performed). Schatzki asserts the primacy of human over material entities: the practice is evidenced by human activity.
Practice theory and IS practice-based research
Conceptual model
In general, evaluation comprises doing the evaluation and use (i.e. the effect of participation, use of results, or both); these effects are forms of learning, arising from doing evaluation, using results, or both (Alkin & King, 2016). As indicated above, mixed methods research supports both quantitative and qualitative data and methods, and is appropriate to the purpose, the research problem (i.e. the nature of the practice), the paradigm, and the strategy. Thus, the steps were: identifying and describing different ways of evaluating e-government outcomes (i.e. different patterns of performance) and establishing relationships between clusters and participant demographics.
From the main proposition "There is no difference in perception among participants on EOEP", propositions were derived to explore and draw conclusions about the patterns of performance according to participant demographics.
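As a minimal sketch of how such a proposition can be examined, the association between cluster membership (pattern of performance) and one demographic characteristic can be tested with a chi-square test of independence. The data and the column names `cluster` and `experience` below are hypothetical illustrations, not taken from this research.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical coded data: each row is a participant (illustrative only)
df = pd.DataFrame({
    "cluster": [1, 1, 2, 2, 1, 2, 1, 2],
    "experience": ["<=5", ">5", ">5", "<=5", ">5", ">5", "<=5", "<=5"],
})

# Cross-tabulate cluster membership against the demographic attribute
table = pd.crosstab(df["cluster"], df["experience"])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.3f}")  # reject "no difference" when p < 0.05
```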
Research design
Elements of research design
Research planning “is the process of making all decisions related to research before it is conducted” (Blaikie, 2010).
Types of research by purpose
Philosophical assumptions in research
Research paradigms
- Positivism
 - Constructivism
 - Post-positivism
 - Implications for design
 
Research strategies
- Case study
 - Survey
 - Implications for design
 
Research methods
- Logic of inquiry
 - Sampling
 - Data collection
 - Data analysis
 - Validation
 - Quantitative, qualitative and mixed methods research
 - Features of mixed methods research
 
Time horizon
The nature of the problem, philosophical and practical considerations, etc. also influence the process (Saunders et al., 2015). Ontological realism holds that objects are real (i.e. tangible, consisting of universals such as properties and relations) and exist by themselves, independent of human perception (i.e. the examiner and the objects of study are logically independent). Applied to this research, according to Schatzki's theory (see Section 3.2.4), practice is real, has characteristics, and can exist in different forms (i.e. the elements combine to form different patterns of performance).
For example, philosophical positions or paradigms range from pure realism through the "middle ground" to pure anti-realism, each with appropriate research processes (i.e. scientific or qualitative methods, not "the scientific method"); accordingly, it is appropriate to prepare a detailed account of how such meaning was generated inductively.
Adopted strategy: sequential explanatory mixed methods (SEMM) survey
- Phase 1: Quantitative research
 - Phase 2: Qualitative research
 - Organization and presentation of research
 
Ethics
There is no standard structure; a suitable strategy is one that is consistent with the research design (i.e. the sequence of the phases) and supports the audience's understanding of this research. This research concerns public employees (the respondents); EOEP (the object); and perception of EOEP (the trait). In contrast to exploratory factor analysis, confirmatory factor analysis (CFA) applies a priori assumptions about the problem under investigation (e.g. the exact number of factors) to establish the structure.
These are conceptually related to the use of the recommendations and thus the use of the evaluation results, i.e. the Use bundle. Evaluation framework: participants neither agreed nor disagreed (on "Indicators measure both financial and non-financial results").
Phase 1 (Quantitative data collection and analysis)
Sampling
Some authors (e.g., Bryman, 2016; Rowley, 2014; Saunders et al., 2015) argue that relying on one's network of contacts to select participants can improve response rates. Information technology (IT) managers in public organizations were typically involved in evaluating e-government performance and knew other employees with the required knowledge and experience. Thus, sample selection began with the researcher contacting a network of colleagues, friends and acquaintances to identify such IT managers.
Data collection
- Questionnaire construction
 - Questionnaire administration
 - Response rate
 
Data preparation
Analytical methods
- Descriptive analysis
 - Correlation
 - Comparison
 - Exploratory factor analysis (EFA)
 - Cluster analysis
 
Data analysis and preliminary results
- Descriptive analysis
 - Exploratory factor analysis
 - Further analysis of elements: comparing by group
 - Cluster analysis
 
EOEP consists of the Assessment (i.e. carrying out the evaluation) and Usage (i.e. the effect of evaluation) bundles, which can share elements. The questionnaire is a popular data collection method because it is simple and practical; for example, the same scale and instructions may apply to several items. According to Chyung et al., a neutral response option allows well-informed participants to express a neutral opinion on a topic.
The communality (i.e. the proportion of an item's variance shared with the other items) should be high (greater than 0.4), evidence that the items share a common domain and have little uniqueness. According to Hair et al. (2018), the effect of non-normality on a sample larger than 50 may be insignificant. Items that loaded less than 0.4 (cf. the recommended minimum absolute value of 0.3) were eliminated; those that did not load on any factor were eliminated and the procedure was repeated.
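A minimal sketch of this iterative elimination procedure follows, assuming the third-party factor_analyzer package; `items` is a hypothetical DataFrame of Likert-scale responses (one column per questionnaire item), and the 0.4 cutoff follows the text above.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer  # third-party EFA package

def iterative_efa(items: pd.DataFrame, n_factors: int, cutoff: float = 0.4):
    """Repeat EFA, dropping items whose communality or highest absolute
    loading falls below the cutoff, until the remaining items load cleanly."""
    while True:
        fa = FactorAnalyzer(n_factors=n_factors, rotation="promax")
        fa.fit(items)
        loadings = pd.DataFrame(fa.loadings_, index=items.columns)
        communalities = pd.Series(fa.get_communalities(), index=items.columns)
        weak = (loadings.abs().max(axis=1) < cutoff) | (communalities < cutoff)
        if not weak.any():
            return fa, loadings  # all items share a common domain
        items = items.drop(columns=items.columns[weak])  # eliminate and repeat
```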
Therefore, it can be concluded that there were differences between participants in the elements of e-government outcomes evaluation and the patterns of performance (i.e. EOEP as an entity and as performance).
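One way to reach such a conclusion is a non-parametric comparison of item or factor scores by group, for example a Kruskal-Wallis test; a minimal sketch follows, in which the data and column names are hypothetical.

```python
import pandas as pd
from scipy.stats import kruskal

# Hypothetical factor scores by participant role (names illustrative only)
df = pd.DataFrame({
    "role": ["IT", "IT", "Evaluation", "Evaluation", "Management", "Management"],
    "score": [3.2, 2.8, 4.1, 4.4, 3.0, 3.5],
})

# Compare score distributions across roles without assuming normality
groups = [g["score"] for _, g in df.groupby("role")]
stat, p = kruskal(*groups)
print(f"H={stat:.2f}, p={p:.3f}")  # p < 0.05 suggests perceptions differ by group
```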
Phase 2 (Qualitative data collection and analysis)
Ethics and quality issues
Sampling
The scope of the research defines the characteristics relevant to the investigation and, therefore, the population and the members to be selected. In contrast, non-probability sampling is appropriate for research that explores cases to understand the sample, not the population (Uprichard, 2013). Thus, the findings may provide useful information about the population but cannot be generalized (Sekaran & Bougie, 2016).
Non-probability sampling is prevalent in social research, for example where a sampling frame and the selection probabilities of elements in the population cannot be established, or where non-response bias is a concern. As mentioned above, the data source, sampling techniques and adequate sample size depend on the research purpose and the characteristics of the population.
Data collection
- Interview protocol
 - Interview administration
 
Research can be classified by duration into cross-sectional and longitudinal (Bryman, 2016; Saunders et al., 2015). Exploratory techniques (e.g. factor analysis and cluster analysis) serve several purposes in a data set, e.g. to identify relationships or to compare groups to detect differences (Hair et al., 2018). In general, pattern matrices are easier to interpret and were therefore examined (Field, 2017; Hair et al., 2018).
Factor analysis followed by cluster analysis (the factor-cluster approach) has found widespread use, as factor analysis ensures unidimensionality (Hair et al., 2018). Data can be standardized to remove scale differences when variables are measured on different scales (Hair et al., 2018). A cluster is significant if it contains at least 10 percent of the sample (Hair et al., 2018).
Interpretation involves examining cluster centroids (comparing the lowest and highest) and labeling the clusters (Hair et al., 2018).
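The steps above can be sketched as follows; this is a minimal sketch that assumes k-means as the clustering algorithm and a hypothetical DataFrame `scores` of factor scores, neither of which is specified by the text.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

def factor_cluster(scores: pd.DataFrame, k: int, min_share: float = 0.10):
    """Standardize factor scores, cluster, flag clusters below 10% of the
    sample, and return centroids for labeling (compare lowest and highest)."""
    X = StandardScaler().fit_transform(scores)  # remove scale differences
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    labelled = scores.assign(cluster=labels)
    shares = labelled["cluster"].value_counts(normalize=True)
    small = shares[shares < min_share].index.tolist()  # below the 10% rule
    centroids = labelled.groupby("cluster").mean()     # inspect to label clusters
    return centroids, shares, small
```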
Analytic technique
- Content analysis
 - Analysis strategy
 
Several decisions depend on the researcher's judgment and there are no universal criteria for determining quality; moreover, there are several research models, with different underlying paradigms, strategies and methods. Addressing issues of ethics and quality in advance can improve the research audience's understanding and increase transparency and credibility.
Data analysis and preliminary results
- Characteristics of participants
 - Organizational capacity
 - Policies and values
 - Participation
 - Roles and responsibilities
 - Use of results
 - Effect of participation
 
Data collection involved choosing an appropriate technique (the semi-structured interview) and recording the data (participants' views) "as is". As can be observed, the main themes are: the factors that described e-government outcomes evaluation practice, the differences in patterns of performance, and the participants' characteristics with which to examine such patterns. Consistent with the purpose of the research and analysis (i.e. descriptive, manifest), an effort was made to present participants' views "as is".
The proposition explored was that there are no significant differences in participants' perceptions of how e-government outcomes are evaluated. The questions related to the participants' professional background, position and experience (i.e. years of experience and self-assessed knowledge) in evaluating e-government outcomes.
Discussions and conclusion
Integrated results and discussion
- Framework for integration
 - The elements of e-government outcomes evaluation
 - The patterns of performance
 - Summary of findings
 
Recommendations for improving practice
Validation
- Validation of Phase 1
 - Validation of Phase 2
 - Validation of mixed methods
 - Limitations
 
Contribution
Further research
The conceptual model (see Figure 3.4) reframed how e-government performance evaluation is carried out as a bundle of practice-material arrangements in the public sector, namely an interaction between entities (people and other things, e.g. resources), activities and connections. The qualitative phase seemed to support this; i.e. organizational performance results were dominant and recommendations for improvement were not implemented. Factor analysis identified insufficient organizational capacity; the qualitative results likewise indicated insufficient internal organizational capacity.
Qualitative results indicated that policy (i.e. Batho Pele and the NEPF) and organizational performance outcomes were important. They also indicated that the effect of evaluation on participants and their organizations was learning to improve knowledge (i.e. conceptual use); this suggests that the learning was not directed at improvement.
Summary