2.4 Agile Software Development
2.4.3 Extreme Programming (XP) Methodology
regarding these requirements documentation techniques is that user stories are lightweight in the sense that they capture a minimal set of requirements that becomes the focus of a single iteration of an agile-based software project. In order to contextualise the iterative techniques used by agile methods, an overview discussion of these methods will be presented in the subsequent sections. The discussion will be structured around the listing of agile methods presented in Abbas et al. (2008) and Dingsøyr et al. (2012).
Beck’s structuring of XP entails a transformation of the conventional software process into a ‘sideways’ orientation, where the focus is the prioritisation of quick coding and testing rather than developing for the future. In order to achieve this strategy of blending the software development activities into smaller iterations, Beck proposed a set of major practices that have to be followed to facilitate compliance with the XP methodology. The main practices of the XP methodology are summarised in Table 2.3.
Figure 2.9: The Evolution to XP (Beck, 1999)

Table 2.3: Core Principles of XP (Adapted from Beck (1999))

Planning game: Customers determine the most valuable features that they want prioritised; these features are documented as user stories that contain specifications regarding the scope and timing of the release of the feature; each feature release is regarded as an iteration/small release of the XP process model. The planning game is a subtle reference to the interactivity between the “business people”/customer and the “technical people”/programmer.

Metaphor: Each project is guided by a single overarching metaphor, a story that provides a user-friendly, non-technical reference for the basic elements of the system; a piece of system jargon that enables all system stakeholders to identify with the overall system without a reference to technically oriented terminology such as system architecture (Grinyer, 2007).

Simple design and refactoring: There is no big design up front; the designs are very much focused on individual user stories; the overall system design evolves into a final design via a process of continuous refactoring (restructuring/optimising system code without changing its behaviour).

Tests: XP is regarded as a test driven methodology (TDD); programmers and customers compile a set of tests as part of a user story document, and this is done before coding.

Pair programming: All production code is written by two programmers in a single location using a single machine.

On-site customer: A customer sits with the team full-time.

Continuous integration: New code is integrated with the evolving system within a short space of time; any new code that compromises the system’s ability to pass the set of pre-defined tests is discarded.
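To make the continuous integration practice listed in Table 2.3 more concrete, the following minimal sketch (in Python, assuming a hypothetical project layout with a tests/ directory and the pytest test runner, neither of which is taken from the cited sources) illustrates an integration gate that accepts a change only when the full set of pre-defined tests still passes; a change that breaks the tests is rejected rather than integrated.

```python
# Minimal sketch of an XP-style continuous integration gate
# (hypothetical project layout; not tied to any specific CI product).
import subprocess
import sys


def run_predefined_tests() -> bool:
    """Run the project's pre-defined test suite and report success or failure."""
    result = subprocess.run([sys.executable, "-m", "pytest", "tests/"])
    return result.returncode == 0


if __name__ == "__main__":
    if run_predefined_tests():
        print("All pre-defined tests pass: the new code may be integrated.")
    else:
        # In XP terms, code that compromises the system's ability to pass
        # the pre-defined tests is not integrated (it is reworked or discarded).
        print("Tests failed: the change is rejected and must be reworked.")
        sys.exit(1)
```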
Table 2.3 makes liberal reference to the entities of customer and programmer. In order to establish some convention and provide clarity with regard to the terminology pertaining to the XP methodology, Lindstrom and Jeffries (2004) explained that the main role players are the customer and the programmer. The customer is a business representative who provides details regarding the system’s requirements and the expected business value. Around these requirements and business value specifications, a set of test cases is developed to ensure that the system delivers the expected functionality. The programmer is a member of the technical team assembled to implement the customer’s requirements and develop the software system. The major contributions of XP are centred on shorter development cycles, the recommendation of an evolutionary design approach (as opposed to the ‘big design up front’ used in traditional methodologies), an emphasis on continuous testing and integration, the invocation of a pair programming strategy and the requirement of having an on-site customer. The essence of the XP methodology is simplicity in terms of planning, design, programming, testing and feedback (Lindstrom & Jeffries, 2004). There is a strong focus on interactivity with the customer because the customer is responsible for prescribing the acceptance tests and then evaluating the software to ascertain whether it delivers the intended business value. Each iteration of an XP cycle produces a working version of the software that is evaluated by the customer. Hence, a high priority is attached to the visibility of the software, thereby enhancing the prospect of customer feedback.
This differs from traditional software development methodologies, where customer involvement is restricted to specific phases of development and the actual system is delivered in its entirety at the end of the development cycle.
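As an illustration of how a customer-prescribed acceptance test can accompany a user story, the sketch below uses an invented library-fine example in Python’s unittest framework; the story, the function and the figures are illustrative assumptions rather than material drawn from the cited studies. It shows the kind of executable criterion a customer could use to evaluate each small release.

```python
# Illustrative user story and acceptance test (invented example).
#
# User story: "As a librarian, I want overdue books to attract a fine of
# R2 per day so that borrowers are encouraged to return books on time."
import unittest


def overdue_fine(days_overdue: int, rate_per_day: float = 2.0) -> float:
    """Calculate the fine payable for an overdue book."""
    return max(days_overdue, 0) * rate_per_day


class OverdueFineAcceptanceTest(unittest.TestCase):
    def test_fine_for_five_days_overdue(self):
        # Acceptance criterion prescribed by the customer for this story.
        self.assertEqual(overdue_fine(5), 10.0)

    def test_no_fine_when_returned_on_time(self):
        self.assertEqual(overdue_fine(0), 0.0)


if __name__ == "__main__":
    unittest.main()
```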
While the XP methodology has received many endorsements, Beck (1999) did concede that it was not “…a finished, polished idea” (p. 77) and that it is ideally suited to small to medium-sized systems where requirements are not concise and are likely to change during the course of development.
Empirical Deliberations Involving XP Methodology
An objective assessment of the effectiveness/success of XP was not easy to acquire because many of the reports in this regard have been based on anecdotal evidence (Abrahamsson & Koskela, 2004; Layman et al., 2004). According to Abrahamsson and Koskela (2004) and Williams et al. (2004), this situation was not entirely unexpected; while empirical evidence is valued, much of the decision making regarding software development within the practitioner community did not have empirical justification. This observation resonates with the assertion by Glazer et al. (2008) that, although it was clear to many businesses and software engineers that the XP attributes prioritising rigorous customer interaction and frequent delivery of software enabled the production of superior software, this claim was not based on accurate empirical evidence. This situation opened the methodology up to criticism regarding its ability to deliver quality software systems.
In an effort to address this situation, Abrahamsson and Koskela (2004) conducted a controlled case study on XP in a practical setting that entailed the development of a system for managing research data. The development team consisted of four developers and the user base was in excess of 300 users. The large user base meant that there would be varying expectations of the system; as such, the developers were provided with an incomplete set of user requirements that would be modified on the basis of continual user interaction with the system.
The overriding objective of the study was to set a benchmark for the performance of XP on the basis of empirical data that provided an indication of the success of core aspects of the XP methodology. This would serve as a point of reference that researchers and practitioners could use in their analysis of the XP methodology, both holistically and with respect to the specific practices that underpin the methodology. This strategy resonates with the suggestion by Erickson et al. (2005) that the main XP practices have to be studied separately to determine whether each of them achieves the expected level of success.
XP and the On-site Customer
One of the significant findings of the Abrahamsson and Koskela (2004) study was that direct customer involvement did not play a significant role in the success of the system. This outcome is commensurate with the results of a similar study by Rumpe and Schröder (2014). The suggestion that the presence of an on-site customer is not pivotal to the success of XP is contrary to the dictates of the methodology as suggested in Beck (1999). However, on closer analysis, Abrahamsson and Koskela do concede that the development team appreciated the convenience of having an on-site customer for quick system reviews, the development of user acceptance tests, as well as a tokenistic presence that instilled a sense of urgency and commitment to the development effort. Also, user involvement in the systems development effort is positively correlated with end-user acceptance of the system (Bano & Zowghi, 2013; Kujala, 2003; Kundalram, 2013; Williams et al., 2004). The Rumpe and Schröder study, which entailed a survey of 45 software practitioners, was not conclusive in this regard.
The majority of the survey responses indicated that the presence of an on-site customer would have been preferred, but it was mostly logistical problems that prevented the dedicated involvement of an on-site customer. There were also
reports of instances where the on-site customer was not competent enough to contribute towards the writing of accurate user stories or the generation of adequate test criteria to ensure that valid tests were conducted on the evolving system. In these instances, the system development effort was delayed, which led to much frustration on the part of the developers as well as the on-site customer.
Hence, the dedicated presence of an on-site customer would be ideal, provided the customer is familiar with the user story concept and also has the competency to write valid test criteria that can guide the system development effort in the right direction. This ideal scenario may not always be feasible, thereby compromising the integrity of the methodology. However, a compromise that entails the on-site customer being involved sufficiently to provide overview detail regarding the compilation of user stories and the generation of test conditions seems to be the most plausible resolution to the dilemma of on-site customer involvement.
XP and Design, Testing and Code Refactoring
As indicated in Table 2.3, XP is regarded as test driven development (TDD), where each feature of the system is coupled with a series of predefined tests that are compiled by the programmers and the on-site customer. It is reported in Causevic et al. (2011) and Layman et al. (2004) that TDD consists of an iterative cycle of testing, development and refactoring of code with the objective of ensuring that all test cases are passed. The significant aspect of TDD is that it minimises the need for a comprehensive system design phase. The system design evolves on the basis of interaction with the on-site customer. This feature of XP may be deemed rather controversial within the annals of conventional systems analysis and design literature. Historically, a comprehensive system design phase, referred to as the big design up front (BDUF), has always been part of the systems development process. However, the XP methodology, with its TDD approach, adopts a minimalist approach to design, thereby placing the methodology in conflict with the BDUF strategy.
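A minimal sketch of this test-first cycle is given below (an invented word-counting example in Python; the function and tests are illustrative assumptions rather than material from the cited studies): the tests are written before the production code, the simplest code that passes them is then added, and refactoring takes place only while the tests remain green.

```python
# Minimal test-driven development sketch (invented example).
# Step 1 (red): the tests are written first and initially fail because
#               word_count does not yet exist.
# Step 2 (green): the simplest implementation is added to make them pass.
# Step 3 (refactor): the code is cleaned up while the tests stay green.
import unittest


def word_count(text: str) -> int:
    """Count the words in a sentence (written only after the tests existed)."""
    return len(text.split())


class WordCountTest(unittest.TestCase):
    def test_counts_words_in_simple_sentence(self):
        self.assertEqual(word_count("extreme programming is test driven"), 5)

    def test_empty_string_has_no_words(self):
        self.assertEqual(word_count(""), 0)


if __name__ == "__main__":
    unittest.main()
```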
BDUF vs TDD
The dilemma arising out of XP’s deviation from the BDUF convention is whether the lack of a comprehensive system design in favour of the iterative TDD approach will compromise system quality. In order to shed some light on this dilemma, Layman et al. (2004) conducted a case study at IBM involving the development of device driver software. The development of the device driver software was done using the conventional BDUF approach and at a later stage, an updated version of the software was developed using TDD. In both instances, UML was used to create designs of the system. However, while a comprehensive design model was developed for the older system, the newer system had a scaled down design model that was accompanied by a set of predefined acceptance tests. A comparative analysis on the number of defects that were identified in the code for both the older and newer versions of the device driver software was conducted and a significant outcome was that the newer system using the TDD approach had 40%
fewer code defects reported. While this outcome seems to suggest that TDD is a superior methodology, Layman et al. do acknowledge that the limitations of the case study approach (such as the lack of external validity and the inability to produce statistically significant results) may justify an element of caution when making generalisations on the basis of such a study.
However, a case study can provide valuable insights into the adoption and effectiveness of a new technology or practice (Layman et al., 2004). The insight into the benefits of TDD alluded to in the Layman et al. study is further reinforced in a systematic literature review of empirical studies on TDD undertaken by Causevic et al. (2011). The review identified and provided a summative report on the outcomes of 48 empirical studies where TDD was the main focus. The most significant outcome of this review was that TDD had a significant positive effect on code quality. This conclusion was based on reports of a lowering in the code defect density11 once TDD was used. Also, many of the studies reported on the positive perception of software practitioners towards TDD.
11 A software defect is a generic term for a fault, failure or error in a software product (Schach, 2008, p. 50)
Whilst the apparent benefits of TDD seem to suggest an improved software process, Causevic et al. warn about the limiting factors that may have a moderating effect on the claimed benefits of using TDD. A significant moderating factor that will hinder widespread adoption of TDD is the lack of experience or knowledge in the use of the methodology. This lack of expertise will have a negative impact on code quality as well as contribute towards a less than optimal return with regard to time and budgetary constraints. Another constraint is the reported unscheduled increases in development time (confirmed in a previous study by George and Williams (2004)). These are attributed to the time incurred to implement a set of requirements, ensure that the acceptance tests are met and engage in code refactoring so that there is an improvement in code quality.
The code refactoring activity, an intrinsic part of TDD, may also introduce regression faults that make it necessary to repeat all of the acceptance tests subsequent to any change in the code base, thereby increasing the development time. Depending on the organisational context, development time may be regarded as a critical factor in enabling business value (Causevic et al., 2011). If a project is not completed within a given time, then it impacts negatively on the business value (Alsultanny & Wohaishi, 2009), thereby compromising the viability of the project and the methodology used to develop it. In a study by Kim et al. (2012), the issue of code refactoring was examined in a case study of the Windows operating system at Microsoft. The study entailed a quantitative component in which 1290 software engineers at Microsoft were surveyed, as well as a qualitative component that entailed semi-structured interviews with six engineers who were assigned the task of refactoring the Windows 7 operating system. In the quantitative part of the study, developers were asked to critically analyse the concept of code refactoring. The reported benefits of code refactoring were improved readability of the code, improved maintainability, a lower defect rate and better extensibility of the system. The reported risks associated with code refactoring entailed the generation of regression faults and the time taken to conduct code refactoring. The significant outcome from the interviews was that
code refactoring provided an opportunity to add business value to the system. In the Windows case, this was done by customising the code to make it compatible with different execution environments. From an overview perspective, it may be concluded that the time overhead incurred by these code refactoring efforts may well be offset by the increased maintainability that is incorporated into the system, thereby saving on additional development costs. Hence, based on the evidence presented, the XP philosophy of intensive refactoring throughout the project (Kim et al., 2012) may be pivotal in improving software quality at a cost that may be repaid by virtue of a reduced maintenance overhead (which, according to Schach (2008, p. 13), consumes approximately 75% of the cost of software development).
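The following sketch (an invented invoicing example in Python; the functions and figures are illustrative assumptions, not material from the cited studies) shows the essence of a behaviour-preserving refactoring step together with the kind of regression test that guards against the refactoring faults discussed above.

```python
# Illustrative refactoring step (invented example): the externally observable
# behaviour of the function is preserved, and the existing test is re-run to
# catch any regression faults introduced by the restructuring.
import unittest


# Before refactoring: duplicated logic that is harder to read and maintain.
def invoice_total_before(prices):
    total = 0
    for p in prices:
        if p > 100:
            total = total + (p - p * 0.1)  # 10% discount on expensive items
        else:
            total = total + p
    return total


# After refactoring: same behaviour, clearer structure.
def invoice_total(prices):
    def discounted(price):
        return price * 0.9 if price > 100 else price
    return sum(discounted(p) for p in prices)


class InvoiceTotalRegressionTest(unittest.TestCase):
    def test_refactored_code_matches_original_behaviour(self):
        prices = [50, 150, 200]
        self.assertAlmostEqual(invoice_total(prices), invoice_total_before(prices))


if __name__ == "__main__":
    unittest.main()
```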
While the lack of experience in the use of TDD and the time overhead have been flagged as factors that may impede the widespread adoption of XP, Causevic et al. (2011) also examined the strategy of adopting a minimalist approach towards an upfront system design. This strategy, which heralds a significant departure from traditional software development ideology, has also received much attention in Breivold et al. (2010) and Mishra and Mishra (2011). In each of these studies, it is reported that the lack of a BDUF approach is not seen as a hindrance or a limiting factor in the quest to develop quality software. However, Causevic et al.
do caution that there are studies where the lack of a comprehensive design phase, particularly for larger, complex systems, has had a negative impact on the quality of the system. Hence, there is no definitive verdict on whether the lack of a comprehensive upfront design is beneficial or detrimental to the quality of a software system. Breivold et al. (2010) are of the opinion that this aspect of agile methodology should be the focus of further empirical inquiry. In a subsequent study, McHugh et al. (2012) conducted a survey of 20 medical device software organisations. Fifteen of these organisations had opted for a plan-driven, prescriptive software process model in which there is a large emphasis on upfront planning and design. It is claimed that such an approach provides the stability and point of reference for a software project that serves a ‘mission critical’ purpose.
This sentiment is endorsed by Meyer (2014, p. 13), who is of the opinion that the agile-like stance of rejecting extensive upfront planning and design is “irresponsible” and does not augur well for the sustainability of agile methodology.
The User Story as a Proxy for BDUF
Whilst this opinion resonates well with the dictates of traditional software engineering practice, the McHugh et al. study did reveal that the software practitioners were of the opinion that user stories are an adequate form of upfront planning and provide the necessary stability that a distinct design phase would provide in traditional methodologies such as the Waterfall approach. These observations give rise to a paradoxical situation in which the academic fraternity is wary of diminishing the relevance of a comprehensive upfront design effort, while the practitioner community is gravitating towards a strategy that reduces the overheads incurred when too much time and effort is spent on the analysis and design phase of the development lifecycle.
As a concluding observation regarding the design issue, the lack of a comprehensive upfront design effort may not necessarily hinder the software process. In some instances, where the system requirements may be deemed to be volatile, the XP methodology consisting of user stories, TDD and code refactoring may be ideal. However, in other instances where the system is deemed to be complex or it serves a ‘mission critical’ purpose, the more prescriptive methodologies such as the Waterfall model with a BDUF focus will be preferred.
XP Methodology and Pair Programming
Another aspect of contention regarding XP is the programmer-centric nature of the methodology. XP is not reliant on expert contributions in the areas of systems analysis and design (Crawford et al., 2013). Most of the analysis, design and coding is done by two programmers who work together on the same programming task using one computer and one keyboard (Dick & Zarnett, 2002;
Hannay et al., 2010). Programmers work together in pairs and develop simple designs that represent a high-level abstraction of the system. XP methodology entails the development of code using pair programming as well as rigorous testing of the code until it conforms to a set of acceptance tests that have been specified upfront, in collaboration with the business stakeholder. While the minimalist