
CHAPTER III RESEARCH METHODOLOGY

F. Models and Techniques of Data Analysis

The model of data analysis used in this study is Multiple Linear Regression Analysis. Purbayu (2005) suggests that multiple correlation is the relationship between several independent variables and the dependent variable.

If a dependent variable depends on more than one independent variable, the analysis of the relationship between the variables is called multiple regression analysis (Wahid Sulaiman, 2004: 80).

The multiple linear regression equation is as follows:

Y = α + β1X1 + β2X2 + e

Where:

Y : Audit Quality.

X1 : Auditor Competence.

X2 : Auditor Independence.

α : Constant.

β : Regression coefficient.

e : Error.
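The equation above can be sketched numerically. Below is a minimal sketch of fitting Y = α + β1X1 + β2X2 by ordinary least squares using only the Python standard library; the scores and variable values are hypothetical illustrations (constructed so the fit is exact), not data from the study:

```python
def fit_ols(y, x1, x2):
    """Solve the normal equations (X'X)b = X'y for [alpha, beta1, beta2]."""
    n = len(y)
    cols = [[1.0] * n, x1, x2]  # design matrix columns: intercept, X1, X2
    xtx = [[sum(a * b for a, b in zip(c1, c2)) for c2 in cols] for c1 in cols]
    xty = [sum(c * yi for c, yi in zip(col, y)) for col in cols]
    # Gauss-Jordan elimination on the augmented 3x4 system.
    m = [row + [b] for row, b in zip(xtx, xty)]
    for i in range(3):
        pivot = m[i][i]
        m[i] = [v / pivot for v in m[i]]
        for j in range(3):
            if j != i:
                factor = m[j][i]
                m[j] = [vj - factor * vi for vi, vj in zip(m[i], m[j])]
    return [row[3] for row in m]

# Hypothetical scores: audit quality (Y), competence (X1), independence (X2),
# built to satisfy Y = 1 + 0.5*X1 + 0.3*X2 exactly, so e = 0 here.
y  = [2.9, 3.4, 3.7, 4.2, 5.0, 3.9]
x1 = [2, 3, 3, 4, 5, 4]
x2 = [3, 3, 4, 4, 5, 3]
a, b1, b2 = fit_ols(y, x1, x2)
print(round(a, 3), round(b1, 3), round(b2, 3))
```

In practice the study runs this estimation in SPSS; the sketch only shows what the coefficients α, β1, and β2 represent.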


2. Techniques of Data Analysis

a) Data Quality Testing

The accuracy of measuring and testing a questionnaire or hypothesis depends heavily on the quality of the data used in testing. Research data will not be useful if the instrument used to collect them lacks reliability (consistency) and validity (a high degree of accuracy).

Both tests indicate the consistency and accuracy of the data collected. The validity and reliability of the questionnaire in this study are tested using SPSS (Statistical Product and Service Solutions).

1) Validity Test

Validity is a measure that indicates the extent to which a measuring instrument is able to measure what it is intended to measure (Purbayu, 2005: 247). Validity testing assesses how accurate a test or instrument is: a measurement is valid if it truly measures its intended objective.

The validity of the data in this research is tested statistically by calculating the correlation between each question and the total score using the Pearson Product-Moment Correlation. Data are declared valid if the r-count, i.e. the Corrected Item-Total Correlation value, is greater than the r-table value at the 0.05 (5%) significance level.
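The item-total check above can be sketched as follows. The item scores and the r-table threshold are hypothetical example values (in the study, SPSS reports the Corrected Item-Total Correlation directly):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

items = {  # hypothetical respondent scores per questionnaire item
    "item1": [4, 5, 3, 4, 5, 2, 4],
    "item2": [3, 4, 3, 5, 4, 2, 3],
    "item3": [5, 5, 4, 4, 5, 3, 4],
}
totals = [sum(v) for v in zip(*items.values())]
R_TABLE = 0.754  # assumed r-table critical value for this tiny sample at 5%

for name, scores in items.items():
    # "Corrected" item-total: correlate the item with the total minus itself.
    corrected = [t - s for t, s in zip(totals, scores)]
    r = pearson_r(scores, corrected)
    print(name, round(r, 3), "valid" if r > R_TABLE else "not valid")
```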

2) Reliability Test

Reliability is a measure that shows the consistency of a measuring instrument in measuring the same phenomenon on other occasions (Purbayu, 2005: 251). The reliability of a variable formed from the questionnaire items is good if its Cronbach's Alpha value is > 0.60.
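Cronbach's Alpha, as used above, can be sketched directly from its definition; the item scores here are hypothetical, and the 0.60 cut-off is the one stated in the text:

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """item_scores: list of per-item score lists over the same respondents."""
    k = len(item_scores)
    totals = [sum(v) for v in zip(*item_scores)]
    sum_item_var = sum(pvariance(s) for s in item_scores)
    return k / (k - 1) * (1 - sum_item_var / pvariance(totals))

items = [  # hypothetical questionnaire item scores
    [4, 5, 3, 4, 5, 2, 4],
    [3, 4, 3, 5, 4, 2, 3],
    [5, 5, 4, 4, 5, 3, 4],
]
alpha = cronbach_alpha(items)
print(round(alpha, 3), "reliable" if alpha > 0.60 else "not reliable")
```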

3. Classical Assumptions Test

To obtain unbiased results, i.e. the Best Linear Unbiased Estimator (BLUE), the regression model must satisfy several assumptions known as the classical assumptions. They are:

a) Normality test

The normality test examines whether, in the regression model, both the dependent and independent variables are normally distributed (Ghozali, 2005: 110). A good regression model has normally distributed data.

The normality test in this study uses SPSS for Windows to test the sample data for each variable. When detecting normality through the normal P-P plot output graph, a variable is considered normal if the data points spread around the diagonal line and follow its direction (Nugroho, 2005: 24, in Jimmy, 2007).

In addition to the P-P plot, normality of the data is also tested using the histogram curve. Judged from the histogram, the data are normal when the curve is roughly balanced on the left and right sides and forms a nearly perfect bell shape.
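The histogram check described above is visual. As a rough numerical counterpart (an illustrative addition, not part of the study's procedure), the sample skewness of the data can be computed: a value near 0 suggests the balanced, bell-like shape the text describes.

```python
from math import sqrt

def skewness(data):
    """Sample skewness: 0 for a perfectly symmetric distribution."""
    n = len(data)
    mean = sum(data) / n
    sd = sqrt(sum((x - mean) ** 2 for x in data) / n)
    return sum(((x - mean) / sd) ** 3 for x in data) / n

residuals = [-0.8, -0.5, -0.2, 0.0, 0.1, 0.3, 0.5, 0.6]  # hypothetical values
print(round(skewness(residuals), 3))
```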

b) Multicollinearity

This test is intended to detect correlation among the independent variables. The multicollinearity assumption states that the independent variables must be free of multicollinearity, i.e. of significant correlations between independent variables.

When symptoms of multicollinearity occur, one way to improve the model is to eliminate variables from the regression model so that the best model can be selected (Purbayu, 2005: 238).

According to Wahid Sulaiman (2004: 89), multicollinearity means that there is a perfect linear relationship among some or all of the independent variables in the regression model.

The multicollinearity test can be conducted in two ways, namely by looking at the VIF (Variance Inflation Factor) and the tolerance value. If VIF < 10 and tolerance > 0.10, there are no symptoms of multicollinearity (Ghozali, 2005: 92).
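With only two predictors, the auxiliary R-squared used in VIF is simply the squared correlation between X1 and X2, so the VIF/tolerance check can be sketched as follows (the predictor scores are hypothetical):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical predictor scores: competence (x1) and independence (x2).
x1 = [2, 3, 3, 4, 5, 4, 3]
x2 = [3, 3, 4, 4, 5, 3, 2]
r = pearson_r(x1, x2)
tolerance = 1 - r ** 2     # with two predictors, auxiliary R^2 = r^2
vif = 1 / tolerance        # VIF = 1 / (1 - R^2)
verdict = "no multicollinearity" if vif < 10 and tolerance > 0.10 else "multicollinearity"
print(round(vif, 3), round(tolerance, 3), verdict)
```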

c) Heteroscedasticity

The heteroscedasticity test examines whether, in the regression model, the variance of the residuals differs from one observation to another; when it remains constant, this is called homoscedasticity (Ghozali, 2005: 105).

A good regression model is homoscedastic, i.e. free of heteroscedasticity (Ghozali, 2005: 105). Heteroscedasticity is the condition in regression where the variance of the residuals is not equal from one observation to another; when the residual variances are similar across observations, this is called homoscedasticity. One way to test for it is by examining the spread of the residual variance (Purbayu, 2005: 242).


d) Autocorrelation Test

The autocorrelation test is used to determine whether there is correlation between members of a set of observations ordered in time (time series) or space (cross section), that is, whether the residual of one observation correlates with that of another observation in the model. In this research, the researcher uses the Durbin-Watson test. A model is declared free of autocorrelation symptoms if the probability value of the Durbin-Watson test is > 0.05 (Wibowo, 2012: 106).
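The Durbin-Watson statistic itself can be sketched from the residuals; the residual values below are hypothetical. Note that as a raw statistic (rather than the probability value SPSS can report) it is read against tabulated bounds, with values near 2 indicating no autocorrelation:

```python
def durbin_watson(residuals):
    """Durbin-Watson statistic: ranges 0..4, ~2 means no autocorrelation."""
    num = sum((e1 - e0) ** 2 for e0, e1 in zip(residuals, residuals[1:]))
    den = sum(e ** 2 for e in residuals)
    return num / den

e = [0.2, -0.1, 0.3, -0.2, 0.1, -0.3, 0.2]  # hypothetical regression residuals
print(round(durbin_watson(e), 3))
```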

4. Hypothesis Test

The hypotheses in this research are tested using linear regression analysis, which is used to determine the extent of the influence of auditor competence and independence, as independent variables, on audit quality as the dependent variable. The hypothesis that competence and independence significantly affect audit quality simultaneously is tested with the F test, while their partial effects are tested with the t test.

a) Partial Test (t test)

The t-test is used to determine the effect of each independent variable on the dependent variable. It is carried out by comparing t-count with t-table. The t-table value is determined at the 5% significance level with degrees of freedom df = (n-k-1), where n is the number of respondents and k is the number of independent variables.

The testing criteria used are:

If t-count > t-table (n-k-1), then Ho is rejected
If t-count < t-table (n-k-1), then Ho is accepted

In addition, the t-test can also be assessed from the probability value (p-value) compared with 0.05 (level of significance α = 5%).

The test criteria used are:

If the p-value < 0.05, then Ho is rejected
If the p-value > 0.05, then Ho is accepted

The percentage contribution of the independent variables X1 and X2, partially, to audit quality as the dependent variable can be seen from the magnitude of the coefficient of determination (r²), which explains how much of the dependent variable the independent variables used in this study can explain.
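The partial decision rule above can be sketched as follows. The t-counts and the t-table critical value here are assumed example numbers (in practice they come from the SPSS coefficients table and a t distribution table), and the squared partial correlation t²/(t² + df) is used for the partial contribution:

```python
def t_test(t_count, t_table):
    """Partial t-test decision rule: reject Ho if |t-count| > t-table."""
    return "Ho rejected" if abs(t_count) > t_table else "Ho accepted"

n, k = 30, 2             # assumed: 30 respondents, 2 independent variables
df = n - k - 1
T_TABLE = 2.052          # assumed two-tailed critical value for df = 27 at 5%

for name, t in [("X1 competence", 3.41), ("X2 independence", 1.25)]:
    partial_r2 = t ** 2 / (t ** 2 + df)  # squared partial correlation
    print(name, t_test(t, T_TABLE), "partial r2 =", round(partial_r2, 3))
```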

b) Simultaneous Test (F Test)

The F test is used to determine whether the independent variables simultaneously influence the dependent variable.

The test is conducted by comparing F-count with the F-table value at the 95% confidence level and degrees of freedom df = (n-k-1), where n is the number of respondents and k is the number of independent variables.


The testing criteria used are:

If F-count > F-table (n-k-1), then Ho is rejected, meaning that the statistical data prove that the independent variables (X1 and X2) together affect the value of the dependent variable (Y).

If F-count < F-table (n-k-1), then Ho is accepted, meaning that the statistical data prove that the independent variables (X1 and X2) do not affect the value of the dependent variable (Y).

In addition, the F test can also be assessed from the probability value (p-value) compared with 0.05 (level of significance α = 5%). The test criteria used are:

If the p-value < 0.05, then Ho is rejected
If the p-value > 0.05, then Ho is accepted

With the significance level in this study set at alpha 5% (0.05), the F test results can then be read from the ANOVA table calculated with SPSS.

Furthermore, the percentage contribution of the independent variables X1 and X2 together to audit quality as the dependent variable can be seen from the magnitude of the coefficient of determination (R²), which explains how much of the dependent variable the independent variables used in this study can explain.
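R² and the F statistic are linked, and both can be sketched from observed and fitted values of the dependent variable; the values below are hypothetical, and the F-table comparison would use tabulated critical values at df (k, n-k-1):

```python
def r_squared(y, y_hat):
    """Coefficient of determination: share of variance in y explained."""
    mean = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, y_hat))
    ss_tot = sum((a - mean) ** 2 for a in y)
    return 1 - ss_res / ss_tot

def f_statistic(r2, n, k):
    """F-count from R^2 with k predictors and n observations."""
    return (r2 / k) / ((1 - r2) / (n - k - 1))

y     = [3.0, 3.5, 4.0, 4.5, 5.0, 4.0]  # hypothetical observed audit quality
y_hat = [3.2, 3.4, 3.9, 4.4, 4.9, 4.2]  # hypothetical fitted values
r2 = r_squared(y, y_hat)
print(round(r2, 3), round(f_statistic(r2, len(y), 2), 3))
```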

