
A. Scope of Research

2. Secondary Data

Secondary data are data that have already been collected for purposes other than the problem at hand (Malhotra, 2004: 102). According to Istinjanto (2009: 38), secondary data are second-hand data, obtained not directly from the original source but from another source. In other words, secondary data are data whose collection is not carried out by the researcher but by other parties. The data can be obtained from literature, journals of previous research, magazines, and document data needed in doing this research.

D. Methods of Data Analysis

1. Data Quality Test

a. Validity test

Validity is the extent to which differences in observed scale scores reflect true differences among objects on the characteristic being measured, rather than systematic or random error (Malhotra, 2004: 269).

According to Riduwan and Kuncoro (2008: 216), validity is a measure that indicates the level of reliability of a measuring instrument. In order to test the validity of a measuring instrument, each part of the instrument must be correlated with the instrument as a whole, which is done by correlating every item score with the total score.

According to Ghozali (2011), a validity test is used to measure the accuracy of the items in a questionnaire. The Corrected Item-Total Correlation is used, and every item whose coefficient of correlation reaches the minimum value of 0.50 is considered satisfactory (valid).
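As an illustrative sketch only (the item names and the simulated questionnaire responses below are hypothetical, not the actual research data), the corrected item-total correlation with the 0.50 cutoff could be computed in Python roughly as follows:

```python
import numpy as np
import pandas as pd

# Hypothetical 5-point Likert responses: rows = respondents, columns = items.
rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(1, 6, size=(30, 5)),
                     columns=[f"item{i}" for i in range(1, 6)])

total = items.sum(axis=1)
for col in items.columns:
    # Corrected item-total correlation: the item is removed from the total
    # before correlating, so the item is not correlated with itself.
    r = items[col].corr(total - items[col])
    print(f"{col}: corrected item-total r = {r:.3f} -> "
          f"{'valid (>= 0.50)' if r >= 0.50 else 'not valid'}")
```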

b. Reliability test

A measuring tool is said to be reliable if, when measuring the same symptom at different times, it always shows the same result; thus, a reliable tool consistently gives the same result (Priyatno, 2013: 30).

According to Sugiyono (2009: 456), reliability is often defined as the consistency and stability of data or findings. From a positivistic perspective, reliability is typically considered synonymous with the consistency of data produced by observations made by different researchers at different times.

Reliability refers to the extent to which a scale produces consistent results if repeated measurements are made on the characteristic (Malhotra, 2004: 267). This research uses a one-time measurement with the Cronbach's alpha (α) test. According to Nunnally in Priyatno (2013: 30), a variable is said to be reliable if its Cronbach's alpha value is > 0.60.
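For illustration only, a one-shot Cronbach's alpha computation on hypothetical item scores (not the actual questionnaire data) might look like the sketch below, applying the > 0.60 rule of thumb:

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical correlated Likert items (rows = respondents, columns = items).
rng = np.random.default_rng(1)
base = rng.integers(1, 6, size=(30, 1))
items = pd.DataFrame(np.clip(base + rng.integers(-1, 2, size=(30, 4)), 1, 5),
                     columns=[f"item{i}" for i in range(1, 5)])

alpha = cronbach_alpha(items)
print(f"Cronbach's alpha = {alpha:.3f} -> {'reliable' if alpha > 0.60 else 'not reliable'}")
```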

2. Classic Assumption Test

a. Normality

The normality test is done in order to know whether the data population is normally distributed or not. The normality of the data is highly crucial because, if the data are normally distributed, they can represent the population (Priyatno, 2010: 71).

The normality test in this research is done using the Kolmogorov-Smirnov test. According to Priyatno (2012: 38), the hypotheses in the one-sample Kolmogorov-Smirnov test are:

a. Null hypothesis (Ho): Data is normally distributed

b. Alternative hypothesis (Ha): Data is not normally distributed

With the following criteria:

1) If the significance value (Asymp. Sig. 2-tailed) > 0.05, the data are normally distributed.

2) If the significance value (Asymp. Sig. 2-tailed) < 0.05, the data are not normally distributed.
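A minimal sketch of this decision rule, using scipy's one-sample Kolmogorov-Smirnov test on hypothetical data (in the actual analysis the test would be run on the research data or the regression residuals):

```python
import numpy as np
from scipy import stats

# Hypothetical data (e.g. regression residuals) to be checked for normality.
rng = np.random.default_rng(2)
data = rng.normal(loc=50, scale=10, size=100)

# One-sample K-S test against a normal distribution whose mean and standard
# deviation are estimated from the sample itself.
stat, p_value = stats.kstest(data, "norm", args=(data.mean(), data.std(ddof=1)))

print(f"K-S statistic = {stat:.3f}, Asymp. Sig. (2-tailed) = {p_value:.3f}")
print("Ho accepted: data are normally distributed" if p_value > 0.05
      else "Ho rejected: data are not normally distributed")
```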

b. Multicollinearity

According to Priyatno (2012: 56), multicollinearity is a condition in which a perfect or nearly perfect linear relationship among the independent variables occurs in the regression model. A regression model is said to suffer from multicollinearity if there is a perfect linear relationship among several or all of the independent variables; in that case, the influence of each independent variable on the dependent variable is hard to isolate. In a good regression model, correlation among the independent variables should not occur.

The way to detect a symptom of multicollinearity is by looking at the Variance Inflation Factor (VIF) and tolerance values: if the VIF value is less than 10 and the tolerance value is more than 0.1, multicollinearity does not occur (Ghozali in Priyatno, 2012: 56).
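As an illustration of the VIF/tolerance rule, the sketch below uses statsmodels on simulated predictors named after the research variables (people, product & process, partnership); the data themselves are hypothetical:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical independent variables (stand-ins for the questionnaire scores).
rng = np.random.default_rng(3)
X = pd.DataFrame({
    "people": rng.normal(size=50),
    "product_process": rng.normal(size=50),
    "partnership": rng.normal(size=50),
})

X_const = sm.add_constant(X)   # VIF is computed on the design matrix with a constant
for i, name in enumerate(X.columns, start=1):   # index 0 is the constant, so skip it
    vif = variance_inflation_factor(X_const.values, i)
    tolerance = 1.0 / vif
    verdict = "no multicollinearity" if vif < 10 and tolerance > 0.1 else "multicollinearity"
    print(f"{name}: VIF = {vif:.2f}, tolerance = {tolerance:.2f} -> {verdict}")
```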

c. Heteroscedasticity

Heteroscedasticity is a condition in which the variance of the residuals differs across the observations in a regression model. A good regression model is one in which heteroscedasticity does not occur. One of the methods that can be used to detect heteroscedasticity is the regression graphic (scatterplot) method (Priyatno, 2012: 62), and this graphic method is the one used in this research.

The graphic method is conducted by looking at the pattern of points on the regression scatterplot. If the points spread above and below the value 0 on the Y axis with no clear pattern, heteroscedasticity does not occur. However, if the points form a regular pattern (wavy, widening, then narrowing), heteroscedasticity occurs (Priyatno, 2012: 69).

The Glejser test can also be used to detect heteroscedasticity. It is done by regressing the absolute value of the residuals on the independent variables; if the significance value between the independent variables and the absolute residuals is more than 0.05, heteroscedasticity does not occur (Ghozali in Priyatno, 2012: 62).
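A rough sketch of the Glejser procedure described above, again on hypothetical data with the research variable names assumed:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical predictors and dependent variable (well-being).
rng = np.random.default_rng(4)
X = pd.DataFrame({
    "people": rng.normal(size=60),
    "product_process": rng.normal(size=60),
    "partnership": rng.normal(size=60),
})
y = 1 + X.sum(axis=1) + rng.normal(scale=0.5, size=60)

# Step 1: fit the main regression and take the absolute residuals.
main_model = sm.OLS(y, sm.add_constant(X)).fit()
abs_resid = np.abs(main_model.resid)

# Step 2 (Glejser): regress the absolute residuals on the independent variables.
glejser = sm.OLS(abs_resid, sm.add_constant(X)).fit()
p_values = glejser.pvalues.drop("const")

print(p_values)
print("no heteroscedasticity" if (p_values > 0.05).all() else "heteroscedasticity indicated")
```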

3. Hypothesis Test

a. Simultaneous Test (F-Test)

Baroroh (2012: 2) defines the F test as a test that is done in order to know the simultaneous influence of the independent variables on the dependent variable.

The simultaneous test according to Suliyanto (2004: 65) uses a significance level of α = 0.05, and the hypotheses can be stated as follows:

H0: bj = 0: People, product & process, and partnership simultaneously do not have an influence on increasing well-being.

Ha: bj ≠ 0: People, product & process, and partnership simultaneously have an influence on increasing well-being.

Baroroh (2012: 3) states that if F count is bigger than F table, then at least one independent variable influences the dependent variable, while if F count is less than F table, none of the independent variables influences the dependent variable.

If F count < F table, then H0 is accepted, meaning the independent variables simultaneously do not influence the dependent variable.

If F count > F table, then H0 is rejected, meaning at least one independent variable influences the dependent variable.
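The F count / F table comparison can be illustrated with statsmodels on simulated data; the model form (well-being on people, product & process, and partnership) follows the hypotheses above, while everything else in the sketch is hypothetical:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

# Hypothetical data for: well_being ~ people + product_process + partnership.
rng = np.random.default_rng(5)
X = pd.DataFrame({
    "people": rng.normal(size=60),
    "product_process": rng.normal(size=60),
    "partnership": rng.normal(size=60),
})
well_being = 2 + 0.5 * X["people"] + 0.3 * X["partnership"] + rng.normal(size=60)

model = sm.OLS(well_being, sm.add_constant(X)).fit()

f_count = model.fvalue                                               # F count from the regression
f_table = stats.f.ppf(0.95, dfn=model.df_model, dfd=model.df_resid)  # critical F at alpha = 0.05

print(f"F count = {f_count:.2f}, F table = {f_table:.2f}, Sig. = {model.f_pvalue:.4f}")
print("H0 rejected: at least one variable has a simultaneous influence"
      if f_count > f_table else "H0 accepted: no simultaneous influence")
```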

b. Partial Test (t-Test)

This test is used to know whether each independent variable partially influences the dependent variable or not, assuming the other independent variables are constant (Levine, 2011: 326). The partial test according to Suliyanto (2004: 65) uses a significance level of α = 0.05, and the hypotheses can be stated as follows:

Ho 1: People partially do not have an influence on increasing well-being

Ha 1: People partially have an influence on increasing well-being

Ho 2: Product & Process partially do not have an influence on increasing well-being

Ha 2: Product & Process partially have an influence on increasing well-being

Ho 3: Partnership partially does not have an influence on increasing well-being

Ha 3: Partnership partially has an influence on increasing well-being

Baroroh (2012: 4) also explains that if the value of t count is bigger than t table, or the probability value is less than α (α = 0.05), then Ho is rejected. Conversely, if the value of t count is smaller than t table, or the probability value is higher than α (α = 0.05), the independent variable does not influence the dependent variable. Suliyanto (2004: 65) formulates the t-test decision rule as follows:

a) t count ≥ t table or Sig. (Asymp. Sig. 2-tailed) < α: Ho is rejected

b) t count < t table or Sig. (Asymp. Sig. 2-tailed) > α: Ho is accepted

According to Sugiyono (2009: 97), the value of t count is taken as an absolute value, so it is not considered positive (+) or negative (-).
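Finally, the partial (t) test decision rule can be sketched as follows; the coefficient names mirror the hypotheses Ho 1 to Ho 3, while the data are simulated for illustration only:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

# Hypothetical data for the same regression model as in the F-test sketch.
rng = np.random.default_rng(6)
X = pd.DataFrame({
    "people": rng.normal(size=60),
    "product_process": rng.normal(size=60),
    "partnership": rng.normal(size=60),
})
well_being = 2 + 0.5 * X["people"] + rng.normal(size=60)

model = sm.OLS(well_being, sm.add_constant(X)).fit()

# Two-tailed critical t value at alpha = 0.05 with n - k - 1 degrees of freedom.
t_table = stats.t.ppf(1 - 0.05 / 2, df=model.df_resid)

for name in X.columns:
    t_count = abs(model.tvalues[name])   # the absolute value is compared; the sign is ignored
    sig = model.pvalues[name]
    decision = "Ho rejected" if t_count >= t_table or sig < 0.05 else "Ho accepted"
    print(f"{name}: |t count| = {t_count:.2f}, t table = {t_table:.2f}, "
          f"Sig. = {sig:.3f} -> {decision}")
```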
