DEVELOPMENT OF APPRAISAL SYSTEM TO IMPROVE MILITARY OFFICERS PERFORMANCE ASSESSMENT PROCESS
Wahyudi Agustiono1
1 University of Trunojoyo Madura
Jl. Raya Telang PO BOX 2 Kamal, Bangkalan, Indonesia Telp : (031) 3011146, Fax : (031) 3011146
ABSTRACT
This paper describes the experience of developing a decision support system for assessing employee performance. Taking into account the distinctive nature of the military organization, which is rigorous, traditional, hierarchical, controlled, bureaucratic, and conservative, this study shows the complexity and challenges involved in designing and developing an appraisal system. In view of the unique way the military organization performs the appraisal process, this study sought various methods and techniques that could guide the development process. The results suggest that the waterfall model was useful for guiding the development process. In building the appraisal system, this study shows how the Simple Multi-Attribute Rating Technique (SMART) and the Representations, Operations, Memory aids and Control aids (ROMC) framework are useful conceptions. Finally, this study concludes that selecting appropriate approaches, methods and techniques is critical to the successful development of an appraisal system, especially in a military organization with such a unique character.
Keywords: Decision Support System, Development of appraisal system, Military Organisation
1. INTRODUCTION.
Employee or staff performance appraisal is "the process of identifying, evaluating and developing the work performance of the employee in the organization, so that organizational goals and objectives are effectively achieved while, at the same time, benefiting employees in terms of recognition, receiving feedback, and offering career guidance" (Lansbury, 1988, p.1). It became popular especially after the industrial revolution, when many companies needed to measure the daily performance of their factory workers precisely, for instance as the basis for awarding weekly bonuses. More broadly, performance appraisal was also used to compare the profiles and achievements of different employees.
Later, by the early 1950s, after its popularity and spread in the industrial sector, employee performance appraisal had become an accepted practice in many organizations (Murphy and Cleveland, 1995), including military institutions. In military organizations in particular, performance appraisal is generally used to evaluate the competencies officers require for a certain position. Given the rigid standards used in military appraisal systems, such as character and the complexity of organizational, group or individual duties and their division (Chang, Cheng et al., 2007), the accuracy of measurement has become increasingly important. A critical issue in a military organization is therefore how to perform a transparent and fair officer performance appraisal process to support decision making pertaining to promotions and operations.
Nevertheless, performance appraisal can involve complex reasoning, and it is not easy to carry out all of the assessment processes correctly so that the results reflect officers' actual competencies. Even when the processes can be done, they may take extra time because of the large number of criteria to be assessed. In cases where all the processes run manually and the assessment involves expert judgment combined with qualitative criteria that are often imprecisely defined, the results are unlikely to be free from bias and may be incorrect. For instance, appraisals from two or more examiners often produce different outcomes and are usually not free from subjective opinion at the time of the assessment.
All the challenges discussed above suggest that using a decision support system to increase the quality of the performance appraisal process is critical. While the literature on developing performance appraisal systems is extensive, the criteria being assessed are highly influenced by the organizational context, so there is a need for more studies in different organizational settings.
This is especially true in military organizations, whose policies and assessment procedures differ from country to country and whose appraisal processes are becoming more detailed and complex; more studies in this area are therefore required.
Therefore, considering both the calls for more study in this area and the unique nature of military assessment procedures, this research aims to develop a decision support system to help address the complexity of military officers' performance appraisal. More specifically, this research uses the Simple Multi-Attribute Rating Technique (SMART) to develop the thinking logic of the decision system so that the appraisal results are more transparent, logical and precise.
Although SMART has long been adopted for developing decision support systems (Chou and Chang, 2008, Risawandi, 2016, Siregar, Arisandi et al., 2017, Valiris, Chytas et al., 2005), there is little evidence that this technique has been applied to officer performance appraisal, especially in a military organization setting.
This paper is organized into six sections. Following this introduction, Section 2 reviews the literature to gain insight into current research on employee or staff appraisal system development, argues that there is currently little research in the military organization context, and describes and justifies the research approach used to design and develop an officer appraisal system; in general, a classical software development model (the waterfall model) is adopted. Section 3 describes the development of the appraisal system in the context of a military organization. Section 4 presents the conclusions, together with the paper's contributions, the limitations of the study, and future research opportunities.
2. MATERIALS/METHODOLOGY.
In this section, the literature exploring the design of appraisal systems for assessing employee performance is synthesized to identify what is currently known in this area. This section also argues that there is a gap in knowledge on the development of decision systems used specifically for military officer performance appraisal, a gap that this study aims to fill.
2.1. Decision Support System
For the last few decades, the Decision Support System (DSS) has been a subject of study for both academics and practitioners, especially after organizations started to computerize their operational aspects, including decision making. Historically, according to McCosh (2004), the DSS concept was first coined in 1965 when Michael Scott Morton introduced his Ph.D. work on "Using a computer to support the decision-making of a manager". However, the first academic paper to introduce the term "decision support systems" was written by Gorry and Morton (1989).
Soon after the introduction of DSS, a large volume of published studies appeared in the literature reporting the development of a wide range of DSSs. Because the field was in its infancy, the majority of early research on DSS development focused on experimental, simulation and model-driven systems that could improve the effectiveness of the decision-making process and offer better quality results. For instance, Courtney Jr and Jensen (1981) developed a system for teaching DSS in schools, while other researchers (Gorry and Morton, 1989, Morton and McCosh, 1968, Morton, 1980) showed how the analytical models they proposed could help managers make recurring key business planning decisions (Bournaris and Papathanasiou, 2011, Valiris, Chytas et al., 2005).
Research on DSS development continued to progress and reached its peak in the 1990s during the personal computer revolution, when organizations in all sectors started to computerize various personalized tasks, especially decision making. It can be noted from the literature that the emphasis of research during this time was on developing DSSs similar to other personal computing tools but used specifically to support managerial decision-making activities, known as Personal DSS (PDSS).
Therefore, besides technical aspects, another major research theme found in the literature is how to make PDSS simpler, more practical and easier to use. This is because PDSSs were mostly developed for use by people throughout an organization, especially managers, who might not be technologically savvy.
Over time, the need for group decision making increased, both to reach consensus and to improve the quality of decisions. To address this demand, researchers attempted to extend the PDSS into the Group DSS (GDSS), which supports collaborative work in the decision-making process and at the same time helps reduce barriers in group meetings.
A review of the literature indicates that GDSS researchers have investigated various design aspects, such as group size, the technical components available, the organizational environment and the nature of the decision process, which determine how a GDSS should be developed to suit an organization's goals.
While DSS is rooted in the Information Systems and computer science disciplines, in practice most DSSs are developed and used to support and improve managerial decision making in various sectors. It is therefore not surprising that a considerable number of studies in the literature describe the experience of developing DSSs to help both public and private organizations make strategic policies and speed up decision-making processes. For instance, many authors have investigated the development of DSSs within government agencies to enhance and support existing public services (Morton, 1980), while in the private sector DSSs have long been developed and used as a way to incorporate information from different sources into sound, informed decision making. One area that has received considerable attention from researchers is the development of DSSs for improving the quality of employee or staff performance assessment. The next section therefore synthesizes the existing studies on the development of DSSs used in performance appraisal to highlight what is currently known, as well as the gap this research aims to address.
2.2. DSS for the Performance appraisal system
Various terms are used to describe a DSS for performance appraisal, such as "performance evaluation system" or simply "performance appraisal system"; this study refers to this type of DSS as a "performance appraisal system". Studies on performance appraisal systems can also be found in other areas such as psychology and human resource management, which deal with the evaluation process, frameworks and procedures guiding the appraisal using the traditional paper-and-pencil (P&P) approach (Rasheed, Yousaf et al., 2011, Roberts, 2003, Soltani, Van der Meer et al., 2006, Tuytens and Devos, 2012). Research into this topic has a long history and there is an abundance of published work in the literature.

This study, on the other hand, is mainly concerned with DSSs for performance appraisal in the sense of computerized systems (desktop, web-based, mobile, groupware or cloud-based applications) designed to enable and help a managerial team improve the process of evaluating individual performance in an organization. Therefore, given the research aim mentioned above, the review focuses on articles on the development of computerized performance appraisal systems rather than traditional P&P appraisal systems. The review showed a large volume of published studies reporting the development of various tools and systems that can be regarded as DSSs used as employee performance appraisal systems.
Examples include online performance evaluation systems (Arreola, 2007, Neary, 2002), electronic human resource management (eHRM) (Jackson, Chuang et al., 2006), and employee performance evaluation (Ahmed, Sultana et al., 2013, Gui, Hu et al., 2014).
As with P&P appraisal systems, many attempts have been made by researchers to study the development of DSSs for employee appraisal. A considerable amount of work, for instance Klein, Snell et al. (1987), has been carried out to develop various computer-based appraisal systems adapted from management, psychology and various assessment techniques or methods. In the same vein, many researchers in computer science and soft computing have proposed and demonstrated useful algorithms and optimization techniques, such as fuzzy logic, AHP and multi-factorial models (Ahmed, Sultana et al., 2013, Manoharan, Muralidharan et al., 2011, Yee and Chen, 2009), to solve the problems and complexity of performance appraisal, since the appraisal process often involves evaluating multiple, and even conflicting, criteria before a decision can be made. However, much of this work only provided models rather than working applications ready for actual use.
Considering the cost-effectiveness, accuracy and other advantages, many organizations have adopted computerized appraisal systems in practice for decades, and numerous empirical studies in the literature report the development of such systems in different sectors.

For instance, Islam and bin Mohd Rasad (2006) reported the development of an appraisal system to evaluate employees' performance in a supplier company based on quantity/quality of work, planning/organization, initiative/commitment, teamwork/cooperation, communication and external factors. Other authors developed tools to assess employee performance in the healthcare (Lee, 2016, Osman, Berbary et al., 2011), manufacturing (Ozkan, Keskin et al., 2014) and education (Neogi, Mondal et al., 2011, Raoudha, El Mouloudi et al., 2012) sectors, while Monika and Mariana (2015) proposed an appraisal model for controlling employees' performance in an IT company. All these examples suggest that appraisal systems have gained considerable interest from organizations, and more and more organizations are likely to migrate their P&P or traditional appraisal processes to computerized appraisal systems.
One type of organization trying to adopt an appraisal system to make its human resource evaluation more efficient is the military institution. It is therefore necessary for military organizations to develop such systems to automate their traditional ways of performing the performance appraisal process and to gain competitive benefits. The next sub-section accordingly reviews the literature on the development of appraisal systems within military institutions to gain insight into the current body of knowledge in this area and to identify the gap this study aims to address.
2.3. Military officer performance appraisal system
In view of the significance of performance appraisal systems in almost every organization today, military institutions have started to develop such systems to improve their officer appraisal processes. While the development of appraisal systems in a military context is increasing, the current state of the literature suggests that more research is still needed, because the review indicated that relatively few empirical studies report the development of appraisal systems for use in military organizations.
One exception found in the literature is Chang, Cheng et al. (2007), who reported the development of a computer-based group decision support system in a military organization aimed at providing more transparent information and helping managers make better performance evaluations.
However, that research paid most of its attention to developing a model that simulates the thinking logic an examiner uses when making a performance assessment and decision. One of the main issues with this approach is a lack of understanding of the technical and non-technical aspects of appraisal system development. In addition, the research only tested the model using numerical simulation and did not involve users in validating the system, so it is not yet known whether the system fits users' needs. This is important because, like many other DSSs, developing a system for managers, who are mostly non-conversant users, requires an extensive understanding of non-technical aspects such as user preferences, background and usability issues. Beyond these shortcomings, the authors also did not make clear the challenges encountered during the development of such an appraisal system.
This matters because military organizations have a distinctive working culture that is traditional, hierarchical, controlled, bureaucratic and conservative (Burr, 1998). In addition, military organizations are generally known for their rigidity compared to non-military organizations, especially when it comes to assessing their officers' performance. Developing an appraisal system in the context of a military institution is therefore much more complex and difficult than in non-military organizations.
Considering all the evidence from the review above, there appears to be limited knowledge in the literature on the development of computerized performance appraisal systems in the military organization context. In other words, this highlights the opportunity for this study to contribute to the current literature by describing the process, along with the challenges encountered, of developing a computerized system used in a military organization for assessing its staff. This research problem in particular emphasizes that an exploratory study in a military organization context is needed to provide empirical evidence on the use of approaches, methods, techniques and tools for appraisal system development. The next sub-section therefore presents the research approach used in this study to address this research problem.
2.4. Research Approach
This study in particular attempts to develop a DSS used as an appraisal system. However, the limited empirical work found in the literature, as discussed above, also raises the question of which system development approach could help this study develop an appraisal system for use in a military organization, because most of the authors cited in the previous section gave limited attention to the approach they used when developing their appraisal systems. Given the limited information that can be extracted from the literature, it was therefore necessary for this study to find a best-fit development approach that could help address the research problem above.
There exist a number of Software Development Life Cycle (SDLC) models suitable for developing a DSS. One alternative that may suit the development of an appraisal system in a military setting, with its traditional, hierarchical, controlled, bureaucratic and conservative nature (Burr, 1998), is the waterfall model. The model is considered appropriate because of the rigidity of its design steps, where each phase must be carefully completed sequentially, which is similar to the working culture in a military organization. In addition, the standardized and rarely changed rules of staff performance appraisal in the military suggest that waterfall is a suitable development approach. Figure 1 illustrates the waterfall model and how it was applied to guide this study in developing a performance appraisal system for use in a military organization.
Fig. 1. Waterfall Model
3. RESULTS AND DISCUSSION.
This section shows how the waterfall model was employed in this study to develop a performance appraisal system in a military organization setting. In reporting the results, the design and implementation phases, which were carried out one right after the other, are reported in one sub-section for conciseness. In addition, since this study only focused on design and development, the maintenance phase is not discussed. Before presenting the results, this section briefly describes the case study of the development of an officers' performance appraisal system in a military organization, to provide the context of where, why and how such a system is needed.
3.1. Case Study Overview
Naval Base XYZ (a pseudonym) is one of the Indonesian naval bases operating under the Eastern Fleet. The naval base is mainly responsible for military operations and law enforcement at sea, especially in the eastern territory of Indonesia, and for the management of resources and port authorities. It also performs various technical and operational activities, such as operating, maintaining and repairing vessels and equipment, as well as strategic and managerial work. For these purposes, the naval base is required to assess, select and deploy or assign its personnel for each task or operation through a thorough and careful process.

Nonetheless, the challenge of this role is how to conduct rigorous, fair and effective performance evaluation procedures that can identify the most suitable officer for a certain assignment or position. In addition, the methods employed in the assessment should acknowledge all of the selection criteria and avoid bias, especially when the appraisal involves judgments from experts combined with qualitative criteria that are often imprecisely defined. The difficulty is aggravated by the fact that each criterion has a different measurement and must then be normalized onto a one-to-five scale (Outstanding, Excellent, Satisfactory, Average and Below), as shown in Table 1 below.
Table 1. Assessment criteria (each rated on a five-point scale: Outstanding, Excellent, Satisfactory, Average, Below)

No  Criteria (C)
1   Morality (C1)
2   Discipline (C2)
3   Military attitude (C3)
4   Loyalty (C4)
5   Initiatives (C5)
6   Working ethic (C6)
7   Teamwork (C7)
8   Toughness (C8)
9   Achievements (C9)
10  Responsiveness (C10)
11  Self motivation (C11)
12  Dignity (C12)
13  Understanding (C13)
14  Social attitude (C14)
15  Responsibility (C15)
16  Physical condition (C16)
17  Healthy aspects (C17)
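To illustrate how these criteria and the five-point scale might be represented inside the appraisal database, the following is a minimal Python sketch; the structure and names are assumptions made for illustration only and are not taken from the actual system.

    # Illustrative sketch only: the 17 assessment criteria of Table 1 and the
    # five-point rating scale, held as plain Python data structures.
    RATING_SCALE = {"outstanding": 5, "excellent": 4, "satisfactory": 3,
                    "average": 2, "below": 1}

    CRITERIA = {
        "C1": "Morality", "C2": "Discipline", "C3": "Military attitude",
        "C4": "Loyalty", "C5": "Initiatives", "C6": "Working ethic",
        "C7": "Teamwork", "C8": "Toughness", "C9": "Achievements",
        "C10": "Responsiveness", "C11": "Self motivation", "C12": "Dignity",
        "C13": "Understanding", "C14": "Social attitude", "C15": "Responsibility",
        "C16": "Physical condition", "C17": "Healthy aspects",
    }

    def rating_to_score(label: str) -> int:
        """Map a qualitative rating label onto the one-to-five scale of Table 1."""
        return RATING_SCALE[label.lower()]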
The difficulty of conducting such a thorough evaluation is likely to multiply because one of the cornerstones of military personnel performance appraisal is the need for rigorous results delivered quickly. This is because one element of the working culture in a military organization is that every member of personnel is required to complete any mission, operation or assignment at a rapid and steady pace. Given the current advancement of information technology, this raises the question of how a computerized system can be designed to address the problems above.
At the moment there is a system in place at Naval Base XYZ used to keep information about personnel. However, this system is not comparable to an appraisal system that could assist the commanders in making assessment and promotion recommendations based on the criteria defined above. In addition, the system is unable to give the commanders insight into an individual staff member, or into the percentage of staff in the naval base who are eligible for a certain position or assignment. Considering all of these challenges and the limitations of the existing system, it became necessary for Naval Base XYZ to initiate the development of a computerized appraisal system. The next sub-section describes how the development proceeded using the waterfall model (see Figure 1) proposed in the previous section.
3.2. Requirement Phase
This phase was the first step in developing the appraisal system and aimed to identify the users' requirements and needs as the basis for developing an appropriate system. A number of techniques were used during this phase to collect the required information, including a study of existing documents, reports and related secondary data on the appraisal procedures used at Naval Base XYZ. A survey was also distributed to key persons in personnel management, as the intended users, to gather their opinions on what features or functionalities an appraisal system should have. Finally, these managerial staff were invited to interview sessions to confirm the information they had provided in the questionnaires.

All the data gathered in this phase were then analyzed using content analysis and transformed into use cases. The developer team also held discussions to refine the initial requirements, which were finally recorded in a requirements document. This document described the modules, functionalities and capabilities the proposed appraisal system should provide, based on the information given by the intended users, as shown in Fig 2. The developer team then discussed the document with the intended users to check whether the proposed functionalities met their needs and could improve the appraisal process at Naval Base XYZ. This discussion also aimed to identify emerging needs and clarify any unclear information. Once the users agreed and there were no further requirements, the document was approved and used in the next development step.
Fig. 2 The proposed modules, functionalities and capabilities of the appraisal system

3.3. Design and Implementation Phase
This phase aimed to create a conceptual design illustrating how the system architecture and its artifacts would look. Using the common DSS components suggested by Efraim, Jay et al. (2011), the proposed system consists of three main components: the appraisal database, the appraisal model and the user interface, as illustrated in Fig 3. The first component, the appraisal database, was built from the personnel data and the criteria data discussed earlier in Fig 2.
Fig. 3 Proposed architecture of performance appraisal system
The second component is the appraisal model. As the appraisal system was intended to be used to make assessments and recommendations, the developer team also needed to design the thinking logic that would handle multi-criteria decision making. For this purpose, a multi-criteria decision analysis technique pioneered by Edwards (1977), the Simple Multi-Attribute Rating Technique (SMART), was adopted as the method for appraising the different criteria. This is because the technique has been widely adopted and is especially useful for formulating DSSs that must assess both qualitative and quantitative criteria, or even criteria that cannot be quantified; accordingly, researchers have used this technique to develop DSSs in many areas (Chou and Chang, 2008, Hsu, Goh et al., 2012, Taylor and Love, 2014). The following steps describe how SMART was implemented as the thinking logic of the proposed appraisal system, following previous work (Olson, 1996):
1. Identify the decision makers and the problem to be solved. In this research, the decision problem is assessing officers' performance at Naval Base XYZ against the evaluation criteria (see Table 1), which commanders will use to make selection and promotion recommendations for a new rank, assignment or mission.
2. Identify the alternatives. The outcome of the appraisal system is used for recommendations on rank promotion and on selection for a particular assignment.
3. Identify the criteria to be evaluated. The evaluation criteria used for decision making are based on the standards (C1-C17) provided by Naval Base XYZ (see Table 1).
4. Assign scores to each criterion to measure the performance of the alternative on that criterion. As mentioned earlier, in developing the appraisal system for Naval Base XYZ, each criterion has a different measurement that must be normalized onto the one-to-five scale (Outstanding, Excellent, Satisfactory, Average and Below) shown in Table 1.
5. Calculate the weight of each criterion. For each criterion, this study used an interval of 1 to 100, where 100 indicates the most important and 1 the least important criterion.
6. Calculate the normalization of each of the criteria (nwj) using the following formula
$nw_j = \dfrac{w_j}{\sum_{n=1}^{k} w_n}$        (1)

where $w_j$ is the weight of criterion $j$ and $\sum_{n=1}^{k} w_n$ is the total weight of all $k$ criteria.
Table 2 provides an example of the weighting and normalization process for the sub-criteria of Healthy aspects (C17).
Table 2. Example of the normalization calculation for Healthy Aspects (C17)

Sub-criterion         Weight   Normalization
Running (C17.1)        100     0.185185185
Pull up (C17.2)         90     0.166666667
Sit up (C17.3)          80     0.148148148
Push up (C17.4)         70     0.12962963
Shuttle run (C17.5)     60     0.111111111
Swimming (C17.6)        40     0.074074074
Height (C17.7)          50     0.092592593
Weight (C17.8)          50     0.092592593
7. Develop an attribute utility score for each criterion, showing how well each alternative performs on that attribute.
8. Determine the utility value of each solution by dividing an individual attribute's score by the sum of all attribute scores. The sum of the weighted attribute scores determines the overall utility of each solution, as shown in the equation below:
$U_i = \sum_{j=1}^{k} w_j u_{ij}$        (2)

where $w_j$ is the weighted importance of the $j$th attribute and $u_{ij}$ is the utility score of the $i$th solution against the $j$th attribute (a short code sketch of equations (1) and (2) follows below).
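As a concrete illustration of equations (1) and (2), the following is a minimal Python sketch of the SMART calculation, assuming criterion weights on the 1-100 interval and utility scores on the one-to-five scale; the function names and the example values (taken from Table 2) are for illustration only and do not reproduce the system's actual code.

    # Minimal sketch of the SMART scoring steps (illustrative, not the system's code).
    def normalize_weights(weights):
        """Equation (1): nw_j = w_j divided by the sum of all criterion weights."""
        total = sum(weights.values())
        return {c: w / total for c, w in weights.items()}

    def overall_utility(weights, utilities):
        """Equation (2): U_i = sum over j of w_j * u_ij (here using normalized weights)."""
        nw = normalize_weights(weights)
        return sum(nw[c] * utilities[c] for c in weights)

    # Reproducing the normalization of the Healthy Aspects sub-criteria (Table 2):
    healthy_weights = {"C17.1": 100, "C17.2": 90, "C17.3": 80, "C17.4": 70,
                       "C17.5": 60, "C17.6": 40, "C17.7": 50, "C17.8": 50}
    print(round(normalize_weights(healthy_weights)["C17.1"], 9))  # 0.185185185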
The third component is the user interface, where the interaction between the users and the appraisal system takes place; these interfaces should also be user friendly. For this purpose, this study used a classical approach for designing DSS interfaces called ROMC, suggested by Sprague Jr and Carlson (1982). According to ROMC, a usable DSS is designed around four concepts, outlined below (an illustrative mapping of these concepts to the system's interface elements is sketched after the list):
1. Representations. A DSS should have physical representations that help users select particular features, interpret the output and then communicate about the decision with other users. The representations should also enable users to exchange data, parameters or information for other operations. Interface elements that can act as representations include icons, charts, tables, maps, text documents, forms, spreadsheets, pictures, summaries and equations.
2. Operations. The interfaces of a DSS should enable users to perform specific tasks, such as gathering data, setting criteria, rating alternatives, running the appraisal process and generating reports. When designing the operation elements, it is also important to decide how they will be controlled or invoked by users, for example through menus, icons or shortcuts.
3. Memory aids. According to this conception, a good DSS user interface should provide memory aids, which could, for instance, remind users to perform certain operations automatically. Memory aids can also be parts of the interface that reduce the memory burden, such as user profiles, data filters, triggers for automatic operations or screen tips that help users identify a certain object.
4. Control aids. These are interface components provided to help users control the DSS through the representations, operations and memory aids. Some control aids can be used for non-decision-oriented operations such as add, edit, delete, view, select, drag-and-drop, and dice-and-slice tasks. Control aids should therefore be designed according to standard conventions for user-system interaction.
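Purely as an illustration of how the four ROMC concepts informed the interface design, the elements mentioned above could be catalogued as in the Python sketch below; the structure and the specific element names are assumptions based on the description in this section, not the system's actual inventory.

    # Illustrative catalogue of interface elements grouped by the four ROMC concepts.
    ROMC_CATALOGUE = {
        "representations": ["icon", "chart", "table", "form", "summary report"],
        "operations": ["gather data", "set criteria", "rate alternatives",
                       "run appraisal process", "generate reports"],
        "memory_aids": ["user profile", "data filter", "auto-operation trigger",
                        "screen tips"],
        "control_aids": ["menu", "add/edit/delete buttons", "view and select",
                         "drag-and-drop"],
    }

    def elements_for(concept):
        """Return the interface elements planned under a given ROMC concept."""
        return ROMC_CATALOGUE[concept]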
By incorporating the ROMC conception, this study was able to develop the user interfaces, whose ease of use is tested in the next sub-section. Examples of the user interface design can be seen in Fig 4 (login), Fig 5 (appraisal form) and Fig 6 (results of the appraisal process) below. Fig 4 shows how the starting screen was designed as a single sign-on mechanism allowing all the different user groups to access the system via the same interface. Using the control aids principle, this login form was designed to let users perform the appraisal process in the system while at the same time improving its security and stability. Fig 5 illustrates how the interface was designed using the operation and memory aids conceptions to make the appraisal process more practical and faster than the manual process. Finally, Fig 6 shows how the reports on the appraisal process were designed using the representation conception, which in particular guided how different icons were put in place to represent certain functions. With this representation idea, users found it easier to accomplish the appraisal process.
Fig. 4 Example of a login form
Fig. 5 Example of the appraisal entry form
Fig. 6 Example of a report on appraisal results
3.4. Verification Phase
This final stage aimed to verify the newly developed system and to see whether it demonstrated a proof of concept of the officer performance appraisal system. For this purpose, a number of intended users representing different levels of management or commander roles at Naval Base XYZ were invited to take part in pilot testing. During the pilot testing, the participants were trained in how to use the system and were then given tasks to perform the appraisal process using the system on their own. On completion of the trial, a survey was run to obtain the participants' opinions of the ease of use and usefulness of the system in helping them complete the required tasks. The survey questions were adopted from the IBM Computer Usability Satisfaction Questionnaire (CUSQ), which has 19 questions assessing functionality (questions 1-9), efficiency (questions 10-11) and usability (questions 12-19). The functionality questions tested whether the system provided the necessary functions, the efficiency questions tested the ability of the system to improve working comfort, and the usability questions measured how easy the users found the system. Ten participants took part in the survey, and their answers were recorded on a five-point Likert scale, with 5 being strongly agree and 1 strongly disagree.
Table 3 shows the results of the survey, which indicate that functionality, efficiency and usability scored 89.56%, 86% and 83.75% respectively. The average user acceptance value was therefore 86.4%, so the newly developed appraisal system can be said to be appropriate according to the users.
Table 3. Results of user acceptance testing (scores per participant, P1-P10)

Functionality
Q1   5 4 4 5 5 5 5 4 5 5
Q2   5 5 4 5 5 5 5 5 5 5
Q3   5 5 4 5 5 5 4 5 4 5
Q4   5 5 5 4 5 4 4 4 4 5
Q5   4 3 4 3 3 5 4 4 5 4
Q6   4 4 4 3 5 4 3 4 4 5
Q7   5 5 4 5 5 4 4 5 5 4
Q8   5 5 4 5 5 5 4 4 4 4
Q9   5 4 4 5 5 5 4 4 4 5
Sub-total per participant: 43 40 37 40 43 42 37 39 40 42
Total: 403; Percentage: (total/max) x 100% = (403/450) x 100% = 89.56%

Efficiency
Q10  4 4 4 3 5 5 5 4 5 5
Q11  3 4 4 4 5 4 5 4 5 4
Sub-total per participant: 7 8 8 7 10 9 10 8 10 9
Total: 86; Percentage: (total/max) x 100% = (86/100) x 100% = 86%

Usability
Q12  5 2 4 3 4 5 5 4 4 5
Q13  4 4 4 3 3 5 3 4 4 4
Q14  5 3 4 4 3 5 3 4 4 5
Q15  5 5 4 5 5 5 4 4 5 5
Q16  5 4 4 5 5 4 5 5 4 4
Q17  4 3 4 3 5 5 4 4 4 4
Q18  4 4 4 4 5 5 4 4 4 4
Q19  5 3 4 3 4 5 4 4 5 5
Sub-total per participant: 37 28 32 30 34 39 32 33 34 36
Total: 335; Percentage: (total/max) x 100% = (335/400) x 100% = 83.75%
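As a check on the arithmetic in Table 3, the following is a small Python sketch of the acceptance-percentage calculation (total score divided by the maximum possible score); the helper function is hypothetical and simply reproduces the figures reported above.

    # Sketch of the acceptance-rate arithmetic from Table 3 (illustrative helper).
    def acceptance_percentage(total_score, n_questions, n_participants, max_point=5):
        """(total score / maximum possible score) x 100%."""
        return total_score / (n_questions * n_participants * max_point) * 100

    print(round(acceptance_percentage(403, 9, 10), 2))   # functionality: 89.56
    print(round(acceptance_percentage(86, 2, 10), 2))    # efficiency:    86.0
    print(round(acceptance_percentage(335, 8, 10), 2))   # usability:     83.75
    print(round((89.56 + 86.0 + 83.75) / 3, 1))          # average:       86.4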
4. CONCLUSION.
As introduced above, this study aimed to report the development of a DSS used as an appraisal system, a type of system now widely adopted in almost every organization. Taking into account the distinctive working culture of the military organization, which is traditional, hierarchical, controlled, bureaucratic and conservative, this study has shown the complexity and challenges involved in designing and developing such an appraisal system. In particular, this study attempts to contribute to the literature by providing empirical evidence of how such difficulties can be addressed by selecting and using existing approaches. The results indicated that the formal system development model called waterfall was useful for guiding the development process successfully, because the rigidity of its design steps, in which each phase must be carefully completed sequentially, resembles the working culture of the military organization. In terms of designing and developing the system architecture, the results showed that this study benefited from using the classical framework of common DSS components, comprising the database, the appraisal model and the interfaces. The results also show how each component was built using a particular approach: the multi-criteria decision analysis technique called the Simple Multi-Attribute Rating Technique (SMART) was useful for developing the thinking logic of the appraisal system, whereas ROMC offered conceptions for guiding the design of usable interfaces.

Further, the high acceptance rates obtained from the evaluation phase indicate that all the above methods, approaches and techniques were useful in providing insight into how to successfully develop an appraisal system, especially in a military context. This is another important contribution of this study, given that the majority of studies in this area are experimental, simulation- or model-driven rather than tested in practice, and that in a military context little work has reported the whole process of appraisal system development. While acknowledging these contributions, this study is not free from limitations, including the small number of participants involved in the pilot testing due to time constraints. In addition, the use of SMART as the thinking logic of the appraisal system may need further verification. These limitations suggest that future studies can make an important contribution to the literature by describing the development of appraisal systems in a military context using different methods and involving more participants to test the resulting systems.
5. ACKNOWLEDGEMENT.
The author would like to express the highest gratitude to all the staff of Naval Base XYZ, Indonesia, who provided support from the initial research and data collection through to the writing of this article.
6. BIBLIOGRAPHY.
Ahmed, I., Sultana, I., Paul, S.K. and Azeem, A.
2013. Employee performance evaluation: a fuzzy approach. International Journal of Productivity and Performance Management 62(7) 718-734.
Arreola, R.A. 2007. Developing a comprehensive faculty evaluation system: A guide to designing, building, and operating large-scale faculty evaluation systems: Anker Publishing Company.
Bournaris, T. and Papathanasiou, J. 2011. A DSS for planning the agricultural production.
International Journal of Business Innovation and Research 6(1) 117-134.
Burr, R.M. 1998. Leading change: The military as a learning organization. Marine Corps Command And Staff Coll Quantico VA.
Chang, J.-R., Cheng, C.-H. and Chen, L.-S. 2007. A fuzzy-based military officer performance appraisal system. Applied Soft Computing 7(3) 936-945.
Chou, S.-Y. and Chang, Y.-H. 2008. A decision support system for supplier selection based on a strategy-aligned fuzzy SMART approach. Expert Systems with Applications 34(4) 2241-2253.
Courtney Jr, J.F. and Jensen, R.L. 1981. SLIM:
System Laboratory for Information Management.
Dallas: Business Publications Inc.
Edwards, W. 1977. 12 Use of Multiattribute Utility Measurement for Social Decision Making.
Conflicting 247.
Efraim, T., Jay, E.A., Liang, T.-P. and McCarthy, R.
2011. Decision support systems and intelligent systems. Upper Saddle River, NJ: Prentice Hall.
Gorry, G.A. and Morton, M.S. 1989. A framework for management information systems. Sloan Management Review 30(3) 49-61.
Gui, X., Hu, Z., Zhang, J. and Bao, Y. 2014.
Assessing Personal Performance with M-SVMs.
2014 Seventh International Joint Conference on Computational Sciences and Optimization.
Hsu, C.-Y., Goh, J. and Chang, P.-C. 2012.
Development of Decision Support System for House Evaluation and Purchasing. World Academy of Science, Engineering Technology 65.
Islam, R. and bin Mohd Rasad, S. 2006. Employee performance evaluation by the AHP: A case study.
Asia Pacific Management Review 11(3) 163-176.
Jackson, S.E., Chuang, C.-H., Harden, E.E. and Jiang, Y. 2006. Toward developing human resource management systems for knowledge-intensive teamwork. Research in personnel and human resources management. Emerald Group Publishing Limited. pp. 27-70.
Klein, H.J., Snell, S.A. and Wexley, K.N. 1987. The systems model of the performance appraisal interview process. Industrial Relations: A Journal of Economy and Society 26(3) 267-280.
Lansbury, R. 1988. Performance management: A process approach. Asia Pacific Journal of Human Resources 26(2) 46-54.
Lee, Y.Y. 2016. Development of a performance appraisal tool for postoperative anesthesia care unit nurses. Journal of Korean Academy of Nursing Administration 22(3) 270-278.
Manoharan, T., Muralidharan, C. and Deshmukh, S.
2011. An integrated fuzzy multi-attribute decision- making model for employees' performance appraisal. The International Journal of Human Resource Management 22(03) 722-745.
McCosh, A. 2004. Keynote Address. The 2004 IFIP International Conference on Decision Support Systems Prato.
Monika, D. and Mariana, S. 2015. The using of data envelopment analysis in human resource controlling. Procedia Economics and Finance 26 468-475.
Morton, M.S., and McCosh, A.M. 1968. Terminal costing for better decisions. Harvard Business Review 46(3) 147-156.
Morton, M.S.S. 1980. Decision Support Systems:
Current Practice and Continuing Challenges. Sloan Management Review (pre-1986) 21(3) 77.
Murphy, K.R. and Cleveland, J.N. 1995.
Understanding performance appraisal: Social, organizational, and goal-based perspectives: Sage.
Neary, D.B. 2002. Creating a company‐wide, online, performance management system: A case study at TRW Inc. Human Resource Management:
Published in Cooperation with the School of
Business Administration, The University of Michigan
and in alliance with the Society of Human Resources Management 41(4) 491-498.
Neogi, A., Mondal, A.C. and Mandal, S.K. 2011. A cascaded fuzzy inference system for university non- teaching staff performance appraisal. Journal of Information Processing Systems 7(4) 595-612.
Olson, D.L. 1996. Decision aids for selection problems: Springer Science & Business Media.
Osman, I.H. et al. 2011. Data envelopment analysis model for the appraisal and relative performance evaluation of nurses at an intensive care unit.
Journal of medical systems 35(5) 1039-1062.
Ozkan, C., Keskin, G.A. and Omurca, S.I. 2014. A variant perspective to performance appraisal system: Fuzzy C–means algorithm. International Journal of Industrial Engineering 21(3).
Raoudha, H., El Mouloudi, D., Selma, H. and Abderrahman, E.M. 2012. A new approach for an efficient human resource appraisal and selection.
Journal of Industrial Engineering and Management 5(2) 323-343.
Rasheed, M.I., Yousaf, H.D.A.S. and Noor, A. 2011.
A critical analysis of performance appraisal system for teachers in public sector universities of Pakistan:
A case study of the Islamia University of Bahawalpur (IUB). African Journal of Business Management 5(9) 3735-3744.
Risawandi, R.R. 2016. Study of the simple multi- attribute rating technique for decision support.
Decision-making 4 C4.
Roberts, G.E. 2003. Employee performance appraisal system participation: A technique that works. Public personnel management 32(1) 89-98.
Siregar, D. et al. 2017. Research of Simple Multi- Attribute Rating Technique for Decision Support.
Journal of Physics: Conference Series, IOP Publishing.
Soltani, E., Van der Meer, R., Williams, T.M. and Lai, P.-c. 2006. The compatibility of performance appraisal systems with TQM principles–evidence from current practice. International Journal of Operations & Production Management 26(1) 92- 112.
Sprague Jr, R.H. and Carlson, E.D. 1982. Building effective decision support systems: Prentice Hall Professional Technical Reference.
Taylor, J.M. and Love, B.N. 2014. Simple multi- attribute rating technique for renewable energy deployment decisions (SMART REDD). The Journal of Defense Modeling and Simulation 11(3) 227-232.
Tuytens, M. and Devos, G. 2012. Importance of system and leadership in performance appraisal.
Personnel Review 41(6) 756-776.
Valiris, G., Chytas, P. and Glykas, M. 2005. Making decisions using the balanced scorecard and the simple multi-attribute rating technique. Performance Measurement and Metrics 6(3) 159-171.
Yee, C. and Chen, Y. 2009. Performance appraisal system using multifactorial evaluation model. World Academy of Science, Engineering and Technology 53(2009) 231-235.