
Prescriptive Study: Developing Design Support

5.7 Realisation

5.7.4 Tool Development

In many instances, the Actual Support takes the form of a computer tool. Tool development can take up a substantial portion of the researcher’s time. The software development methodologies in Appendix B.2 should be particularly useful to ensure that this time is used effectively. These methodologies have a number of well-defined stages that help the tool developer to:

• identify the interaction of the tool with the environment;

• clarify the interaction between its sub-systems; and

• gradually develop a clear picture of what form these sub-systems should take.

The main points are summarised in this section.

One useful feature of these methodologies is that they help develop the software such that the built-in functionality of the tool can be easily tested (Support Evaluation). However, these methodologies are intended mainly for large, commercial software systems where teams of software developers work together.

Support development effort in research is often much smaller in scale, so the researcher should be careful not to get lost in the details of these methodologies. We find them useful insofar as they help in developing the specification of the software at various levels of detail and in ensuring that these specifications are logically linked to one another.

In the context of knowledge-based systems development, the CommonKADS methodology (Schreiber et al. 2000) formulates a set of ‘How’ questions to guide the realisation phase, which can be useful for all tool development (see also Appendix B.2.5). These questions help clarify the technical system specification in terms of the architecture, implementation platform, software modules, representational constructs and computational mechanisms required to realise the tool.

Raphael et al. (1999) propose, in their Computer Aided Engineering (CAE) tool development methodology, that the three main issues to be addressed in realising a tool are:

• choosing representation(s);

• choosing methods (i.e., reasoning procedures);

• defining visualisation and distribution needs (i.e., user-interface needs).

We suggest adding to the latter the needs for introduction, installation, customisation, and maintenance.
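The separation of representations, methods and user-interface needs advocated by Raphael et al. can also be mirrored in the architecture of the tool itself. The sketch below is purely illustrative (the names and the reliability example are hypothetical, not taken from their methodology): each concern sits in its own module, so any one of them can be replaced without touching the others.

```python
from dataclasses import dataclass

# Representation: how design knowledge is stored (hypothetical example).
@dataclass
class Component:
    name: str
    reliability: float  # probability of functioning, in the range 0..1

# Method: the reasoning procedure, kept independent of storage and display.
def system_reliability(components: list[Component]) -> float:
    """Series-system reliability: the product of component reliabilities."""
    result = 1.0
    for c in components:
        result *= c.reliability
    return result

# Visualisation: a minimal text rendering; a graphical interface could
# replace this module without touching the representation or the method.
def render(components: list[Component]) -> str:
    lines = [f"{c.name}: {c.reliability:.2f}" for c in components]
    lines.append(f"system: {system_reliability(components):.2f}")
    return "\n".join(lines)

if __name__ == "__main__":
    parts = [Component("pump", 0.95), Component("valve", 0.90)]
    print(render(parts))
```

Keeping the three concerns in separate modules also eases the later substitution of, for example, a richer visualisation during DS-II, a point the distribution needs above allude to.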

We would like to highlight the CaeDRe methodology, developed by Bracewell et al. (Bracewell and Shea 2001; Bracewell et al. 2001; Langdon et al. 2001), as it supports the development of design tools in a research group setting and is based on our DRM philosophy. CaeDRe aims to provide a systematic process for producing evaluation-ready prototype systems targeted at improving design processes. The reasons behind the development of CaeDRe were that design researchers “need the necessary tool set and support to rapidly prototype research systems without being unnecessarily hindered by implementation details and fast changes in computer technology and standards”, and that they need to develop these research systems (which we call design support tools) such that they can be evaluated for their “capabilities and merit for fundamental research output beyond initial benchmark tests”. This should lead to the development of research systems that are

“both theoretically capable and suited to achieving these capabilities within varying design processes”, and “enable the development of useable computational design tools early on in research projects”, taking into account that design researchers often do not have the breadth of experience necessary for software development and are constrained by the limited time available for implementation.

CaeDRe unifies three complementary approaches: the Cae tool development methodology developed at EPFL (Raphael et al. 1999), the product platform concept used in industry, and the DRM described in this book. Figure 5.14 shows the methodology, clearly illustrating the link with DRM15 in the four stages, the need to use methods from the Social Sciences, as described in Chapter 4, and the need to start working on the Outline Evaluation Plan during the development of the design tool. CaeDRe adds methods from engineering software development and refines the PS stage specifically for tool development.

15 Note that the terminology of the stages of DRM has evolved since the publication of this article.

Figure 5.14 Tool Development Model from Bracewell et al. (2001)

The CaeDRe tool development process has five activities (see Appendix B.2.1 for details):

• Task definition.

• Choice of representations.

• Choice of methods.

• Definition of visualisation, interaction and distribution strategies.

• Theoretical and experimental validation.

Bracewell et al. (2001) add that requirements for testing and evaluation of the tool are a critical issue that needs to be taken into account while defining the visualisation and distribution needs, e.g., the requirements for data collection.
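One way to satisfy such data-collection requirements is to build event logging into the tool from the outset, so that usage data for the evaluation in DS-II is captured automatically rather than retrofitted. The sketch below is a minimal illustration; the class and event names are hypothetical.

```python
import json
import time

# A minimal sketch of building data collection into a research tool from
# the start. All names are hypothetical; a real tool would log whichever
# events its evaluation plan requires.
class InteractionLog:
    def __init__(self):
        self.events = []

    def record(self, action: str, **details):
        # Timestamp every user interaction together with its details.
        self.events.append({"t": time.time(), "action": action, **details})

    def export(self, path: str):
        # One JSON object per line, ready for later analysis in DS-II.
        with open(path, "w") as f:
            for e in self.events:
                f.write(json.dumps(e) + "\n")

log = InteractionLog()
log.record("open_diagram", name="CI-diagram")
log.record("edit_link", source="clarity", target="reliability")
print(len(log.events))  # prints 2
```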

Before starting to program, it is often helpful to describe the structure and expected behaviour of the computer program using a formal model. Several alternative ways of modelling are possible within the two major software development paradigms: function-oriented and object-oriented (see Appendix B.2.2). This not only prepares the programming, but also allows the use of logic to pre-check the program.
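As a minimal illustration of the object-oriented style, the sketch below (the class and its factors are hypothetical) states the intended structure together with the expected behaviour, expressed as executable checks that can be run as soon as a provisional implementation exists.

```python
# Hypothetical sketch: structure and expected behaviour written down
# before detailed programming begins.
class AssessmentModel:
    """Holds factor levels (0..1) and derives an overall assessment."""

    def __init__(self):
        self.factors: dict[str, float] = {}

    def set_factor(self, name: str, level: float):
        if not 0.0 <= level <= 1.0:
            raise ValueError("level must be between 0 and 1")
        self.factors[name] = level

    def overall(self) -> float:
        # Expected behaviour: the mean of all factor levels, 0.0 if empty.
        if not self.factors:
            return 0.0
        return sum(self.factors.values()) / len(self.factors)

# Behavioural specification as executable checks.
m = AssessmentModel()
m.set_factor("clarity", 0.5)
m.set_factor("simplicity", 0.75)
assert m.overall() == 0.625
```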

Programming can be supported by CASE (computer-aided software engineering) tools, workbenches or environments (see Appendix B.2.6). Choosing the right software platform can sometimes be difficult. It may be helpful to go through the examples or tutorials provided by commercial platforms, to see if the application is suitable for the job at hand. Procuring a software platform can be expensive, although more and more freeware is available on the Internet. In practice, the choice of a software platform will be limited by constraints such as the available hardware and software in the research group or the environment in which the Actual Support is to be evaluated.

During programming, it is natural and commonplace to make mistakes. In order to expedite error-free realisation of the tool, it is helpful to follow the popular design-test-debug cycle, where

• a provisional version of the program is implemented first;

• the program is checked for its required functionality;

• if the result is not satisfactory, errors are identified and the program modified;

• in the case of a more serious error in the algorithm itself, the concept is re-evaluated.
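The cycle above can be illustrated with a toy example (hypothetical, not from the cited sources): a provisional implementation, a functional check that exposes an error, and a modified version.

```python
# First, a provisional version of the program is implemented:
def normalise(values):
    """Scale values so they sum to 1 (intended functionality)."""
    total = sum(values)
    return [v / total for v in values]

# Second, the program is checked for its required functionality:
def check():
    out = normalise([2, 3, 5])
    assert abs(sum(out) - 1.0) < 1e-9, "results should sum to one"
    # A check that exposes an error: an all-zero input crashes with
    # ZeroDivisionError, so the provisional version must be modified.
    try:
        normalise([0, 0])
        zeros_ok = True
    except ZeroDivisionError:
        zeros_ok = False
    return zeros_ok

# Third, the error is identified and the program modified:
def normalise_fixed(values):
    total = sum(values)
    if total == 0:
        return []  # debugged behaviour for the all-zero case
    return [v / total for v in values]

print(check())  # prints False: the provisional version fails the check
```

Had the failure revealed a flaw in the normalisation concept itself rather than in its implementation, the last bullet point would apply and the concept would be re-evaluated.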

A software development methodology to speed up the design-test-debug cycle is called prototyping (Smith 1991). This methodology is especially useful where it is hard to clarify what functionality may be useful or even necessary, or to evaluate a concept before it is elaborated, e.g., a concept for a user interface. It entails developing a quick computer implementation of some initial ideas, having it evaluated by potential users, and modifying it based on the feedback received. In terms of DRM, this implies quick iterations between the PS and DS-II stages. An added advantage is that computer implementation forces the researcher to discipline his or her thoughts.

We end this section with a note of caution about user-interface design: often researchers spend unduly long hours programming and modifying user interfaces, even though they are not part of the core contributions. It is essential to stay focused and implement only what is absolutely necessary in order to evaluate the program’s functionality and impact. Because of the effect the interface can have on the use of the support and thus on its impact, it is important to do a pilot study, so as to identify and correct harmful side-effects of the user interface that can interfere with the desired functionality of the support. The pilot study should focus on obtaining specific feedback, rather than a general opinion, to allow effective modification of the user interface. For more details on user-interface design, see Appendix B.3.