The analytical tool used in this research is multiple linear regression, run in SPSS 23, where the regression equation contains interaction terms (the multiplication of two or more independent variables). This interaction test is used to determine the extent of the interaction between the variables, namely audit quality, earnings management, and earnings quality.
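As an illustration of such an interaction (moderated) model, the sketch below uses Python's statsmodels rather than SPSS; the column names AQ, EM, and EQ and the simulated data are hypothetical stand-ins for the study's actual variables, not the author's dataset:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data; AQ, EM, EQ are hypothetical column names
rng = np.random.default_rng(0)
df = pd.DataFrame({"AQ": rng.integers(0, 2, 100),   # e.g. a Big-4 auditor dummy
                   "EM": rng.normal(size=100)})     # e.g. discretionary accruals
df["EQ"] = 0.5 * df["AQ"] - 0.3 * df["EM"] + rng.normal(size=100)

# 'AQ * EM' expands to AQ + EM + AQ:EM, i.e. both main effects
# plus their interaction (the multiplication of the two predictors)
model = smf.ols("EQ ~ AQ * EM", data=df).fit()
print(model.summary())
```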
1. Descriptive Statistics
Descriptive statistics provide an overview or description of the data seen from the mean, standard deviation, variance, maximum, minimum, sum, range, kurtosis, and skewness (Ghozali, 2012).
Descriptive statistics are computed from the collected data and are used to describe the research variables (audit quality, earnings management, and earnings quality) in terms of the number of observations, maximum, minimum, mean, range, and standard deviation.
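A minimal sketch of these descriptive statistics, again with hypothetical column names and simulated data:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({"AQ": rng.integers(0, 2, 100),
                   "EM": rng.normal(size=100),
                   "EQ": rng.normal(size=100)})

# N, mean, standard deviation, minimum, and maximum per variable
desc = df.agg(["count", "mean", "std", "min", "max"]).T
desc["range"] = desc["max"] - desc["min"]   # range = max - min
print(desc)
```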
2. Classical Assumption Test
Once the estimator model has been decided, further tests are conducted to ensure that the regression model is the best linear unbiased estimator (BLUE). The assumptions are verified through the normality test, multicollinearity test, heteroscedasticity test, and autocorrelation test.
a. Normality Test
The normality test aims to test whether, in the regression model, the independent variables, the dependent variable, or both are normally distributed. A regression model whose data distribution is normal or near normal is said to be good (Ghozali, 2011).
There are two ways to detect whether the residuals are normally distributed: graphical analysis (the normal probability plot) and statistical tests. This research tests normality with the Kolmogorov-Smirnov (K-S) test, supported by the histogram and the normal probability plot.
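A sketch of the K-S test on standardized regression residuals, under the same hypothetical setup; scipy's one-sample K-S test against N(0, 1) plays the role of SPSS's one-sample Kolmogorov-Smirnov test:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(0)
df = pd.DataFrame({"AQ": rng.integers(0, 2, 100), "EM": rng.normal(size=100)})
df["EQ"] = 0.5 * df["AQ"] - 0.3 * df["EM"] + rng.normal(size=100)
resid = smf.ols("EQ ~ AQ * EM", data=df).fit().resid

# Standardize the residuals, then run a one-sample K-S test against N(0, 1)
z = (resid - resid.mean()) / resid.std()
stat, p = stats.kstest(z, "norm")
print(f"K-S = {stat:.4f}, p = {p:.4f}")   # p > 0.05 suggests normality
```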
b. Multicollinearity Test
According to Jaggia and Kelly (2013), a multicollinear regression model presents several problems. First, it becomes very difficult to distinguish the individual impact of one independent variable, because it is related to the other independent variable(s). Second, if the model is highly multicollinear, the independent variables can be statistically insignificant, or the parameter estimates can carry the wrong sign. Kothari (2015) suggested the use of the variance inflation factor (VIF) to determine whether an independent variable is correlated not only with the dependent variable but also with the other independent variable(s). If the VIF value is greater than 10, the model is said to be multicollinear and the problematic independent variable must be eliminated from the analysis.
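A sketch of the VIF screen described above, with the same hypothetical predictors:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
df = pd.DataFrame({"AQ": rng.integers(0, 2, 100), "EM": rng.normal(size=100)})

X = sm.add_constant(df[["AQ", "EM"]])
# Column 0 is the constant; a VIF above 10 flags multicollinearity
for i, name in enumerate(X.columns[1:], start=1):
    print(name, round(variance_inflation_factor(X.values, i), 3))
```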
c. Heteroscedasticity Test
A good research model should have a constant error variance across all observations, regardless of the time periods and objects involved. This homoscedasticity condition is required before the hypothesis tests can be performed (Wooldridge, 2013). To test for the presence of heteroscedasticity, the Breusch-Pagan test is used. If the p-value of the test is less than the chosen significance level, normally set at 5%, the model is heteroscedastic.
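A sketch of the Breusch-Pagan test, assuming the hypothetical model from the earlier sketches:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(0)
df = pd.DataFrame({"AQ": rng.integers(0, 2, 100), "EM": rng.normal(size=100)})
df["EQ"] = 0.5 * df["AQ"] - 0.3 * df["EM"] + rng.normal(size=100)
model = smf.ols("EQ ~ AQ * EM", data=df).fit()

# Regresses squared residuals on the predictors; a small p-value
# indicates that the error variance depends on the regressors
lm_stat, lm_p, f_stat, f_p = het_breuschpagan(model.resid, model.model.exog)
print(f"Breusch-Pagan LM p-value = {lm_p:.4f}")  # p < 0.05 -> heteroscedastic
```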
d. Autocorrelation Test
This test aims to detect whether, in the linear regression model, there is a correlation between the disturbance error in period t and the disturbance error in period t-1 (the previous year). Autocorrelation causes the confidence intervals to become wider, while the variance and standard errors are underestimated. The autocorrelation test used in this research is the Runs test, which tests whether there is a high correlation between residuals. There is autocorrelation between residuals when the Asymp. Sig. (2-tailed) value is significant, i.e., below 0.05; if it is above 0.05, i.e., not significant, there is no autocorrelation between the residuals (Janie, 2012).
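A sketch of the Runs test on the residuals, again under the same hypothetical setup; statsmodels' runstest_1samp, splitting the residuals at their median, is used here as a stand-in for SPSS's Run test:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.sandbox.stats.runs import runstest_1samp

rng = np.random.default_rng(0)
df = pd.DataFrame({"AQ": rng.integers(0, 2, 100), "EM": rng.normal(size=100)})
df["EQ"] = 0.5 * df["AQ"] - 0.3 * df["EM"] + rng.normal(size=100)
resid = smf.ols("EQ ~ AQ * EM", data=df).fit().resid

# Runs test on residuals split at their median
z_stat, p_value = runstest_1samp(resid.values, cutoff="median", correction=True)
print(f"Runs test p = {p_value:.4f}")   # p >= 0.05 -> no autocorrelation
```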
3. Test of Hypothesis
a. Global Significance Test (F-test)
With the aim of knowing the overall extent to which the independent variable(s) predict the change in the dependent variable, it is important to check whether all coefficients of the independent variable(s) are jointly statistically significant (Nachrowi and Usman, 2006). To do so, the F-test is performed. Using a given significance level, α, of 1%, 5%, or 10%, the model is said to be significant once the probability of the F-statistic falls below α.
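A sketch of reading the overall F-test from a fitted model, under the same hypothetical setup:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({"AQ": rng.integers(0, 2, 100), "EM": rng.normal(size=100)})
df["EQ"] = 0.5 * df["AQ"] - 0.3 * df["EM"] + rng.normal(size=100)
model = smf.ols("EQ ~ AQ * EM", data=df).fit()

# Joint (overall) significance of all slope coefficients
print(f"F = {model.fvalue:.3f}, p = {model.f_pvalue:.4g}")
# The model is significant when p falls below alpha (1%, 5%, or 10%)
```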
b. Variable Significance Test (t-test)
There is a possibility that, while the model is significant in explaining the relationship between the independent variable(s) and the dependent variable, some individual variables do not possess statistical significance (Jaggia and Kelly, 2013; Nachrowi and Usman, 2006). In this case, the t-test is used to measure the significance of each independent variable in explaining the change in the dependent variable. Using a given significance level, α, of 1%, 5%, or 10%, an independent variable is said to be significant once the probability of its t-statistic falls below α.
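A sketch of the per-variable t-tests, under the same hypothetical setup:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({"AQ": rng.integers(0, 2, 100), "EM": rng.normal(size=100)})
df["EQ"] = 0.5 * df["AQ"] - 0.3 * df["EM"] + rng.normal(size=100)
model = smf.ols("EQ ~ AQ * EM", data=df).fit()

# Per-coefficient t statistics and p-values; a variable is
# individually significant when its p-value falls below alpha
print(pd.DataFrame({"t": model.tvalues, "p": model.pvalues}))
```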
c. Coefficient of Determination Test (R-square Test)
The R-square test is used to measure how close each observation is to the regression line produced by the entire data set (Jaggia and Kelly, 2013). With values ranging from 0 to 1, the higher the value of R-square, the higher the percentage of response-variable variation that is explained by the linear regression model.
d. Adjusted R-square Test
The R-square value will rise as more independent and control variables are added to the regression equation. This renders the R-square value misleading when the model actually includes unimportant or inappropriate variables (Jaggia and Kelly, 2013). The adjusted R-square test shows how much of the variable variation the linear regression model explains after discounting the effect of the number of independent variables used in the model's linear equation.
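Both statistics can be read directly off a fitted model; a sketch under the same hypothetical setup:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({"AQ": rng.integers(0, 2, 100), "EM": rng.normal(size=100)})
df["EQ"] = 0.5 * df["AQ"] - 0.3 * df["EM"] + rng.normal(size=100)
model = smf.ols("EQ ~ AQ * EM", data=df).fit()

print(f"R-squared          = {model.rsquared:.4f}")
print(f"Adjusted R-squared = {model.rsquared_adj:.4f}")  # penalizes extra predictors
```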
E. Research Variables Operationalization