
JOURNAL OF EDUCATION FOR BUSINESS, 89: 156–164, 2014
Copyright © Taylor & Francis Group, LLC
ISSN: 0883-2323 print / 1940-3356 online
DOI: 10.1080/08832323.2013.800467

Seeking Empirical Validity in an Assurance of Learning System

Sherry L. Avery, Rochell R. McWhorter, Roger Lirely, and H. Harold Doty

University of Texas at Tyler, Tyler, Texas, USA

Business schools have established measurement tools to support their assurance of learning (AoL) systems and to assess student achievement of learning objectives. However, business schools have not required that their tools be empirically validated to ensure that they measure what they are intended to measure. The authors propose that business schools use confirmatory factor analysis (CFA) to evaluate their AoL measurement systems. They illustrate a CFA model used to evaluate the measurement tools at their college. The authors' approach is in its initial steps, currently evaluating individual measurement tools, but the authors are working toward developing a system that can evaluate the entire AoL measurement system.

Keywords: AACSB, assessment, assurances of learning, confirmatory factor analysis

A decade ago, the Association to Advance Collegiate Schools of Business (AACSB) International ratified new accreditation requirements including the addition of assurance of learning (AoL) standards for continuous improvement (Martell, 2007). As part of this addition, schools seeking to earn or maintain AACSB accreditation must develop a set of defined learning goals and subsequently collect relevant assessment data to determine direct educational achievement (LeClair, 2012; Sampson & Betters-Reed, 2008). The establishment of the mission-driven assessment process requires "well-documented systematic processes to develop, monitor, evaluate, and revise the substance and delivery of the curricula on learning" (Romero, 2008, p. 253).

With establishment of the 2003 AACSB standards, all schools "must develop assessment tools that measure the effectiveness of their curriculum" (Pesta & Scherer, 2011, p. 164). As a response to this outcomes assessment mandate, a number of schools created models to depict and track their assessment functions (Betters-Reed, Nitkin, & Sampson, 2008; Gardiner, Corbitt, & Adams, 2010; Zocco, 2011). However, the question arises as to the validity of system models for measuring learning outcomes: does the model measure what it purports to measure? Do the learning experiences accomplish the learning goals outlined in the systems model? This question is a very important one because once validity of a measurement system is established, it provides confidence in a program and quality of assurance in achieving the school's mission (Baker, Ni, & Van Wart, 2012).

Correspondence should be addressed to Sherry L. Avery, University of Texas at Tyler, College of Business and Technology, 3900 University Boulevard, Tyler, TX 75799, USA. E-mail: savery@uttyler.edu

The purpose of this article is to illustrate development of an empirically based AoL system that may be used by other business schools seeking accreditation. Relevant literature on this topic will be examined next.

REVIEW OF LITERATURE

A search of the literature for an empirically validated AoL system yielded results for research covering either the validation of AoL tools or validation of an AoL model. Each is discussed in the following section.

Measures of Validity in AoL Assessment Tools

Measures of validity associated with AoL learning outcomes were located in business literature by reviewing articles that described locally developed assessment tools and externally validated instruments. For instance, researchers developed an assessment tool to explore students' self-efficacy toward service and civic participation. They utilized traditional scale development, confirmatory factor analysis (CFA), and simultaneous factor analysis in several populations to ensure the validity and reliability of their instrument to measure AoL criteria for ethics and social responsibility (Weber, Weber, Sleeper, & Schneider, 2004). Another tool offered was a content-valid assessment exam created to measure business management knowledge (Pesta & Scherer, 2011).

Also, a matrix presented by Harper and Harder (2009) depicts demonstrated abilities intersected with competency clusters; the clusters were developed from literature describing valid research into "the kinds of knowledge and skills that are known to be necessary for success as a practitioner in the MIS field" (p. 492). However, no statistical measures of validity were provided. Additionally, instances of use of externally validated instruments such as the revised version of the Defining Issues Test to assess ethical reasoning instruction in undergraduate cost accounting (Wilhelm & Czyzewski, 2012) and use of the CAPSIM computer simulation to assess business strategy integration (Garrett, Marques, & Dhiman, 2012) were found.

Measures of Validity in AoL Assessment Models

Various models have been offered for outcomes measurement as part of a process-based approach for meeting AoL standards (i.e., Beard, Schwieger, & Surendran, 2008; Betters-Reed et al., 2008; Hess & Siciliano, 2007), but without statistical evidence or discussion of validity measures. However, the search of the literature found an article by Zocco (2011) that presented a recursive model to address and document continuous and systematic improvement and discussed validity issues surrounding the application of recursion to a process such as AoL. Although helpful for looking at school improvement, the model does not measure validity of the model itself. Therefore, the review of literature offered several tools and a model with validity calculations; however, no example of an empirically validated system was found.

CASE STUDY: ASSURANCES OF LEARNING AT THE UNIVERSITY OF TEXAS AT TYLER

During the past five years, the College of Business at the University of Texas at Tyler (hereafter referred to as the college) has conducted a complete redesign of its AACSB AoL system. To understand the rationale for this design change it is important to explore several key drivers of this decision, especially in light of the fact that our prior AoL system was cited as one of our best practices during our last maintenance of accreditation visit. At our last visit, the college operated three different and largely unrelated assessment systems: one for the AACSB, one for The Association of Technology, Management and Applied Engineering, and one for the Southern Association of Colleges and Schools (SACS). In some ways, these independent assessment systems simplified accreditation reporting: each system was tailored to the specifics of a single accrediting body, and the data associated with one system were not considered in collaboration with data collected for a different accrediting body. For example, AACSB and the college's assessment procedures were treated as completely independent. This approach simplified reporting, but hindered integrating different assessment data in the larger curriculum management process.

A second major contextual factor relevant to our AoL process was feedback from our last AACSB visit that recommended revisions to the vision and mission statements for the college and AACSB AoL processes. As part of that revision process, the college clarified its mission and identified five core values.

Based largely on these contextual factors, faculty determined we were at an ideal point to design a new single integrated assessment model to meet the needs of each of our accrediting bodies. Further, we determined that the new system should be linked to the new mission by incorporating the core values as learning outcomes, and that we should attempt to assess the validity of the system in terms of both the theoretical model used to design the system and the measurement model used to organize the data collection. These additional steps would allow more confidence in the evidence-based changes we were making in program structure and course curriculum. The full-scale implementation began in the 2010–2011 school year; our model is more fully described next.

Faculty-Driven Process

AoL in the college is a faculty-driven process. Oversight of this process is charged to the AoL committee, a committee comprised of a faculty chair, the undergraduate program director, the graduate programs coordinator, and four at-large faculty members. The composition of the committee provides cross-sectional representation of all disciplines and programs in the college.

The committee works closely with our faculty to ensure that each learning objective is measured periodically, at least twice during each five-year period but generally more often. The faculty employ a variety of measurement strategies, including major field tests, embedded test questions, case analyses, observation of student presentations, activity logs, simulations, and other class assignments or projects. Analyses of results guide the committee in its work with the faculty to develop and implement appropriate actions to ensure curricula and pedagogy are managed in a manner enhancing student learning and development. Figure 1 illustrates how the AoL assessment process operates in a continuous improvement mode.

FIGURE 1 Assurance of learning curriculum management process at the University of Texas at Tyler (color figure available online).

Conceptual Framework

The AoL system in the college is based on a set of shared core values: professional proficiency, technological competence, global awareness, social responsibility, and ethical courage, as seen in Figure 2. These mission-based core values form the framework for our comprehensive, empirically validated AoL models for both the bachelor in business administration program and the master of business administration program, as well as other college programs that are outside the scope of AACSB accreditation. AoL in the college has evolved to the point where our current system is second generation; that is, it is the culmination of an assessment of the AoL system itself. Many of the best features of the prior system were retained, including assessment of discipline-based knowledge, communication skills, and the use of quantitative tools and business technology. The result of this process is a value-based conceptual framework whose efficacy can be tested empirically using confirmatory factor analysis. To our knowledge, our college is the first AACSB-accredited program to design an empirically validated AoL system. Figure 2 depicts the conceptual framework of our AoL system for the bachelor in business administration program.

METHOD

Data Collection

The faculty developed 10 learning objectives to support the five learning goals of the college. A measurement tool, such as the Major Field Test or a rubric, was designed for each objective. Assessment was conducted within required core business courses that included students across all college of business majors. Students were generally juniors or seniors in one of the business majors. Results were then collected and compiled centrally in an administrative function within the college.

Analysis Approach

Several of the measurement tools included a number of items that collectively assessed the specific learning objective. CFA was conducted to assess the empirical validity of the item measures and learning objectives. CFA was chosen for assessment because it tests how well the measured variables represent the constructs (Hair, Black, Babin, Anderson, & Tatham, 2005).

The items that comprise each construct are identified prior to running the CFA. We then confirm or reject that the items properly reflect the construct, in this case the learning objective. CFA was conducted on six of the learning objectives.
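
The article's models were estimated in AMOS. As a rough illustration of how such a one-factor measurement model is specified, the sketch below uses the open-source Python package semopy instead (an assumption, not the authors' tool), and the indicator names are hypothetical data-file columns corresponding to the MFT subscores listed later in Table 3.

```python
# Illustrative one-factor CFA specification. The article's analysis was run in
# SPSS/AMOS; semopy is used here only as an open-source stand-in, and the
# column names are hypothetical placeholders for the MFT subscores.
import pandas as pd
import semopy

description = (
    "BusinessKnowledge =~ accounting + economics + management + quantitative"
    " + finance + marketing + law + systems + international"
)

def run_cfa(data: pd.DataFrame):
    model = semopy.Model(description)
    model.fit(data)                       # maximum-likelihood estimation by default
    loadings = model.inspect()            # parameter estimates, including factor loadings
    fit_stats = semopy.calc_stats(model)  # chi-square, RMSEA, CFI, TLI, among others
    return loadings, fit_stats

# Example usage with a hypothetical file of subscores, one column per indicator:
# loadings, fit_stats = run_cfa(pd.read_csv("mft_scores.csv"))
```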


FIGURE 2 Conceptual framework of bachelor in business administration program at the University of Texas at Tyler (color figure available online).


TABLE 1
Outcomes

Outcome | Means of assessment | Empirical validity assessment
Professional Proficiency 1: Students demonstrate that they are knowledgeable about current business theory, concepts, methodology, terminology, and practices. | Major Field Test (MFT) | Confirmatory factor analysis
Professional Proficiency 2: Students can prepare a business document that is focused, well organized, and mechanically correct. | Rubric-assessed writing assignment | Confirmatory factor analysis
Professional Proficiency 3: Students are able to deliver a presentation that is focused, well organized, and includes appropriate verbal and nonverbal behaviors. | Rubric-assessed oral presentation assignment | Confirmatory factor analysis
Technological Competence 1: Students demonstrate understanding of information systems and their role in business enterprises. | Section V (Information Systems) subscore from MFT | None (single item)
Technological Competence 2: Students are able to use business software, data sources, and tools. | Rubric-assessed technology project | Sample size too small to run confirmatory factor analysis
Global Awareness 1: Students demonstrate awareness of global issues and perspectives. | Global Awareness Profile (GAP) test, standardized external exam | Confirmatory factor analysis
Global Awareness 2: Students are knowledgeable of global issues and perspectives that may impact business activities. | Rubric-assessed global business case | Confirmatory factor analysis
Social Responsibility: Students exhibit an understanding of social consequences of business activities. | Business case | 10 yes or no questions; cannot assess using confirmatory factor analysis
Ethical Courage 1: Students understand legal and ethical concepts. | MFT, Section VII (Legal and Social Environment) subscore | None (single item from standardized external exam)
Ethical Courage 2: Students make ethical decisions. | Ethics game assessment of decision making | Binary data; unable to assess using confirmatory factor analysis

We were unable to run CFA for some learning objectives because they were either measured by a single item, the sample size was too small, or the data were binary, negating the applicability of CFA. Table 1 details the learning objectives, measurement tools, and whether CFA was conducted. In the following section, we discuss the general approach used in the CFA analysis. Then we follow up with two examples of the CFA analysis; the first example is empirically valid, and the second example was not empirically valid. We used a combination of the software tools SPSS and AMOS (version 21; IBM, Meadville, PA) to support the analysis.

For each of the CFA analyses, we followed a three-step approach documented in many of the leading academic journals: (a) review of the raw data, (b) assessment of model fit, and (c) assessment of construct validity. Prior to the CFA analyses, we reviewed the data for sample size, outliers, missing data, and normality. We determined whether the sample size was adequate for the model based on suggested requirements that range from 5 to 20 observations per variable (Hair et al., 2005). The existence of outliers, along with their potential impact on normality and the final results, was assessed at both the univariate and multivariate levels by reviewing the Mahalanobis distance (D2) calculation for each case; a case that has a substantially different value from the other D2 calculations is a potential outlier. Then, we identified the amount of missing data and assessed the potential impact on the analysis. Finally, we assessed normality by reviewing both skewness and kurtosis at the univariate and multivariate levels. Values of zero represent a normal distribution. For skewness, less than 3 is acceptable (Chou & Bentler, 1995; Kline, 2005). For kurtosis, Kline stated that less than 8 is reasonable, with greater than 10 indicating a problem and over 20 an extreme problem.
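
These screening checks are straightforward to script. The following is a minimal sketch in Python (NumPy, pandas, SciPy) shown only to make the steps concrete; the article's screening was done with SPSS and AMOS, and the file and column names here are placeholders.

```python
# Sketch of the data-screening step described above (observations-per-variable
# ratio, Mahalanobis D^2 for outliers, missing data, skewness/kurtosis).
# Column names are hypothetical; the original analysis was run in SPSS/AMOS.
import numpy as np
import pandas as pd
from scipy import stats

def screen_data(df: pd.DataFrame) -> None:
    items = df.dropna()                      # keep rows with complete data
    n, k = items.shape
    print(f"Observations per variable: {n / k:.1f} (suggested range: 5-20)")
    print(f"Cases dropped for missing data: {len(df) - n}")

    # Mahalanobis distance D^2 for each case (multivariate outlier check)
    x = items.to_numpy(dtype=float)
    diff = x - x.mean(axis=0)
    inv_cov = np.linalg.pinv(np.cov(x, rowvar=False))
    d2 = np.einsum("ij,jk,ik->i", diff, inv_cov, diff)
    print("Largest D^2 values (potential outliers):", np.sort(d2)[-3:])

    # Univariate normality: skewness < 3 and kurtosis < 8 treated as acceptable
    for col in items.columns:
        sk = stats.skew(items[col])
        ku = stats.kurtosis(items[col])      # excess kurtosis (0 = normal)
        print(f"{col}: skewness={sk:.2f}, kurtosis={ku:.2f}")

# Example usage with hypothetical MFT subscore columns:
# screen_data(pd.read_csv("mft_scores.csv"))
```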

We evaluated how well the data fit the measurement model using AMOS software. We used maximum-likelihood estimation, which is a widely used approach, is fairly robust to violations of normality, and produces reliable results under many circumstances (Hair et al., 2005; Marsh, Balla, & McDonald, 1988). First, we evaluated the chi-square statistic, which measures the difference between the observed and estimated covariance matrices. It is the only statistic that has a direct statistical significance test and is used as the basis for many other fit indices (Hair et al., 2005). Statistical significance in this case indicates that an error of approximation or estimation exists. Many researchers question the validity of the chi-square statistic (Bentler, 1990), so if it is significant, additional indices should be used to evaluate overall model fit. The root mean square error of approximation (RMSEA) is a standardized measure of the lack of fit of the data to the model (Steiger, 1990). It is fairly robust with small sample sizes (i.e., 250 or less). Thresholds of .05–.08 have been suggested, with Hu and Bentler (1999) recommending a cutoff close to .06. The Bentler-Bonett (1980) nonnormed fit index (NNFI) was used because it also works well with small sample sizes (Bedeian, 2007). Generally, .90 or better is considered adequate fit, with Hu and Bentler (1999) suggesting a threshold of .95 or better for good fit. The NNFI and comparative fit index (CFI) are incremental fit indices in that they assess model fit by comparison with a baseline model (Bentler, 1990; Bentler & Bonett, 1980; Hu & Bentler, 1999). The NNFI and CFI generate values between 0 and 1, with .90 or greater representing adequate fit (Bedeian, 2007; Hu & Bentler, 1999).
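
For readers who want to see where these indices come from, the sketch below computes them from chi-square statistics using the standard formulas. AMOS reports them directly, so this is only an illustrative cross-check; of the values reported later in Table 2, only the RMSEA can be reproduced, because the baseline-model chi-square is not given.

```python
# Minimal sketch: common CFA fit indices computed from chi-square statistics.
# Formulas follow standard structural equation modeling texts.
import math

def rmsea(chi2: float, df: int, n: int) -> float:
    """Root mean square error of approximation."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def nnfi(chi2_m: float, df_m: int, chi2_b: float, df_b: int) -> float:
    """Nonnormed fit index (Tucker-Lewis index); requires the baseline chi-square."""
    return (chi2_b / df_b - chi2_m / df_m) / (chi2_b / df_b - 1.0)

def cfi(chi2_m: float, df_m: int, chi2_b: float, df_b: int) -> float:
    """Comparative fit index, also relative to a baseline model."""
    d_m = max(chi2_m - df_m, 0.0)
    d_b = max(chi2_b - df_b, d_m)
    return 1.0 - d_m / d_b

# Business knowledge model from Table 2: chi-square = 61.35, df = 27, n = 218.
print(round(rmsea(61.35, 27, 218), 3))   # ~0.077, matching the reported value
```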

The final step was to assess construct validity, which is the extent to which a set of measured items accurately reflects the theoretical latent construct the items were designed to measure (Hair et al., 2005). The standardized factor loadings should be statistically significant and .5 or higher (Hair et al., 2005). Convergent validity was assessed by calculating the average variance extracted (AVE) and construct reliability (CR). The average percentage of variance extracted among a set of construct items is a summary indicator of convergence. AVE of .5 or higher suggests adequate convergence; less than .5 indicates that, on average, more error remains in the items than variance explained by the latent construct (Fornell & Larcker, 1981). High CR indicates that the measures consistently represent the same latent construct. Values between .6 and .7 are acceptable, with .7 or higher indicating good reliability (Nunnally, 1978).
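
Both convergence statistics are simple functions of the standardized loadings, as the following sketch illustrates. The demonstration values are the business knowledge loadings reported later in Table 3, and the results reproduce the composite reliability and variance extracted shown in Table 2.

```python
# Sketch of the convergent-validity calculations described above, assuming
# standardized factor loadings as input. Demo values are the business
# knowledge loadings from Table 3; the outputs match Table 2 (.853 and .409).
from typing import Sequence

def average_variance_extracted(loadings: Sequence[float]) -> float:
    """AVE: mean squared standardized loading; .5 or higher suggests convergence."""
    return sum(l ** 2 for l in loadings) / len(loadings)

def construct_reliability(loadings: Sequence[float]) -> float:
    """CR: (sum of loadings)^2 / [(sum of loadings)^2 + sum of error variances]."""
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

business_knowledge = [.590, .703, .644, .232, .723, .737, .545, .552, .837]
print(round(construct_reliability(business_knowledge), 3))      # ~0.853
print(round(average_variance_extracted(business_knowledge), 3)) # ~0.409
```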

RESULTS

The purpose of this article is to illustrate the method we used to assess the empirical validity of our learning objectives to aid other business schools in their AoL journey. It is not our goal to suggest that our measurement tools or learning objectives should be universally adopted. Therefore, for illustration of our process, we limit our discussion to the overall fit of our constructs and then discuss in detail two examples of how we used CFA: an example of a valid measure (business knowledge) and an example of a measure that requires some modifications (oral communication).

Table 2 documents the model fit indices for the CFA analyses performed for six of the learning objectives. The learning objectives for business knowledge, written communication, global awareness context, region, and perspectives are valid for construct reliability and model fit. Therefore, we are reasonably confident that these objectives adequately measure the learning goals established by the college. The model fit for oral communication was below suggested thresholds.

Business Knowledge

Students take the Educational Testing Service Major Field Test (MFT) for the bachelor's degree in business in a capstone class in their senior year. The MFT is a widely used standardized exam for business students. The capstone class is required for all business majors and the MFT is administered in all sections of the capstone class, thus ensuring that all business majors participate in the assessment prior to graduation. To ensure that students take the exam seriously and give their best effort, their results are reflected in their course grade. Two hundred and eighteen responses were obtained from the exams administered in 2010–2011. Nine composite scores from the exam are used to assess the overall business knowledge of the student (see Table 3 for a listing of these nine items). A review of the data found no missing data or significant outliers. The ratio of the number of responses to the number of variables (218/9 = 24) was well above the recommended threshold range of 5–20. The kurtosis and skewness statistics were less than the recommended thresholds of 8 and 3, which indicates only a slight departure from normality. Therefore, we were reasonably confident in proceeding to the next phase of the analysis, evaluating model fit by running a confirmatory factor analysis on the item measures.

The chi-square was significant; however, the RMSEA was .077, below the recommended threshold of .08. The RMSEA is parsimonious in that it considers the impact of the number of variables in its calculation, so it is a better indicator of model fit than chi-square.

TABLE 2
Fit Indices

Learning objective | Composite reliability | Variance extracted | χ2 | RMSEA | RMSR | NNFI | CFI | df
Business knowledge (n = 218) | .853 | .409 | 61.35* | .077 | 11.66 | .932 | .949 | 27

Note: RMSEA = root mean square error of approximation; RMSR = root mean square residual; NNFI = nonnormed fit index; CFI = comparative fit index.
*p < .05.


TABLE 3
Factor Loadings

Business knowledge indicators | Standardized loadings
Accounting | .590**
Economics | .703**
Management | .644**
Quantitative analysis | .232*
Finance | .723**
Marketing | .737**
Law | .545**
Systems | .552**
International | .837**

Oral communication indicators | Standardized loadings
Progression | .173*
Conclusion | -.017
Projection | .916*
Delivery | .455*
Eye contact | .276*
Gestures | .246*
Pace | .975**
Fillers | .104

*p < .05. **p < .001.

The NNFI and CFI were .932 and .949, well above the recommended threshold of .90. Overall, the model fit is acceptable.

In assessing construct validity of the items, we noted that all items were statistically significant; however, one item measure, quantitative analysis, fell below the recommended loading of .50. The composite reliability of .853 was well above the recommended threshold of .6, and the variance extracted (.409) was slightly below the recommended threshold of .50. Overall, there is evidence that the item measures adequately reflect the latent construct of business knowledge. However, further analysis is needed to determine the cause of the low factor loading of quantitative analysis. We provide Figure 3 as a visual representation of the CFA model for this construct.

FIGURE 3 Business knowledge confirmatory factor analysis model.

Oral Communication

The students' oral communication skills were measured by a rubric-assessed oral presentation assignment administered in the business communication course, which is part of the required core curriculum. The business communication professor for all sections of the class completed the assessment; there is only one business communication professor, thus ensuring consistency of the measurement process. The rubric is comprised of nine item measures (see Table 3 for a list of these items). Data were obtained from the 2010–2012 assessments, which resulted in 161 observations for the CFA of the oral communication construct. A review of the data found two missing observations for the item measure conclusion and one missing observation for eye contact. Because the impact of missing data was small, we used the mean imputation method for the missing observations. We also identified one potential outlier; we deleted the case on a trial run and found that it did not have a significant impact on normality or the results. The sample size ratio (161/9 = 17.9) is in the recommended threshold range of 5–20. The multivariate kurtosis statistic of 23.393 was well above the recommended threshold of 8, which provides evidence of a departure from normality. The univariate skewness statistics were below the threshold of 3.

Because of the nonnormal distribution, we attempted to run an asymptotically distribution-free estimation method to conduct the CFA. Unfortunately, this resulted in an inadmissible solution because of the existence of a negative error variance. Therefore, we used the maximum-likelihood estimation technique, which often provides reasonable results with departures from normality. The χ2 was significant; however, the adjusted chi-square ratio (χ2/df) was 4.453, below the recommended threshold of 5. The RMSR was .033, well below the recommended threshold of .10, while the RMSEA was .145, above the recommended threshold of .08. The NNFI and CFI were .693 and .770, below the recommended threshold of .90. Our overall assessment is that the model fit is poor. The RMSEA, NNFI, and CFI are affected by model complexity, which could be an indication that the number of variables in the model affected model fit. We provide Figure 4 as a visual representation of the CFA model for oral communication.

FIGURE 4 Oral communication confirmatory factor analysis model.

Two of the items were not statistically significant: conclusion and fillers. Only two of the nine items were greater than the recommended threshold of .50: projection and pace. The composite reliability of .62 met the recommended threshold of .6. The variance extracted of .246 was well below the recommended threshold of .50. Our conclusion is that the item measures do not accurately reflect the latent construct of oral communication.

DISCUSSION

The purpose of this article is to highlight an AACSB-accredited program as a case study of its design of an empirically validated AoL system and to demonstrate how empirical validity improved our AoL system. When appropriate, we used confirmatory factor analysis to validate the measurement instruments used to assess student achievement of the learning objectives established by the faculty and stakeholders of the college. We provided a description of the CFA process used to assess the empirical validity of the learning objectives. Finally, we illustrated the process by discussing the results of the validation process on two learning objectives.

For the business knowledge learning objective, we found that both the model fit and construct reliability are valid. However, we noted that the factor loading for quantitative analysis was much lower than those of the other item measures. In reviewing the raw scores, not surprisingly, we found that our students do not perform as well in the quantitative analysis topic when compared with the other topics covered by the MFT, indicating that even though we had a valid measure of business knowledge, our students need improvement in their quantitative skills. These results prompted us to examine the curriculum of the class where most of the quantitative methods are taught.

For the oral communication learning objective, we found that the model fit was poor, construct reliability was low, and many of the item measures from the rubric did not load. These results prompted us to examine the measurement tool used to assess achievement of oral communication competency. Corrective action includes a review of the rubric and also of the process used to collect the data.

CONCLUSION AND LIMITATIONS

We found value in, and therefore will continue, empirically validating the AoL learning objectives using CFA. The validation process has increased support for the AoL system among faculty who understand and are trained in the research process. We have received positive feedback from the AACSB and higher education associations on our validation process. Most importantly, it provides confidence in the tools we are using to measure student achievement of the learning objectives. Now that the process and the supporting models are in place, it will be relatively simple to continue the validation process in order to continually improve. Because of the method we used to capture the assessments, we are able to use the same validation process for both AACSB and SACS accreditations.


We are continually striving to improve our validation process. One limitation is that we are unable to simultaneously assess the empirical validity of the entire model of learning objectives. Assessment is conducted by class rather than by individual student across all classes. Therefore, we do not have data for one student for all learning objectives. To address this limitation, we are evaluating both in-house and commercially developed databases to track student data across classes and semesters. For example, the University of Central Florida (UCF) developed an in-house database for tracking of student data (Moskal, Ellis, & Keon, 2008). Taskstream (www1.taskstream.com) is a commercially available database for tracking data.
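
As a purely hypothetical illustration of what such tracking would involve, a long-format record keyed by student could be pivoted into the one-row-per-student matrix that a full-model CFA would require. The field names and example values below are assumptions for illustration, not a description of the UCF or Taskstream systems.

```python
# Hypothetical long-format assessment records: one row per student per assessed item.
import pandas as pd

records = pd.DataFrame(
    [
        {"student_id": "S001", "semester": "2011FA", "course": "MANA4395",
         "learning_objective": "Business knowledge", "item": "accounting", "score": 48},
        {"student_id": "S001", "semester": "2011SP", "course": "BCOM3302",
         "learning_objective": "Oral communication", "item": "delivery", "score": 4},
    ]
)

# One row per student, one column per (objective, item) pair: the wide matrix a
# single CFA over the entire set of learning objectives would require.
wide = records.pivot_table(
    index="student_id", columns=["learning_objective", "item"], values="score"
)
print(wide)
```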

REFERENCES

Baker, D. L., Ni, A. Y., & Van Wart, M. (2012). AACSB assurance of learning: Lessons learned in ethics module development. Business Education Innovation Journal, 4, 19–27.

Beard, D., Schwieger, D., & Surendran, K. (2008). Integrating soft skills assessment through university, college, and programmatic efforts at an AACSB accredited institution. Journal of Information Systems, 19, 229–240.

Bedeian, A. G. (2007). Even if the tower is "Ivory," it isn't "White": Understanding the consequences of faculty cynicism. Academy of Management Learning & Education, 1, 9–32. doi:10.2307/40214514

Bentler, P. M. (1990). Comparative fit indexes in structural models. Psychological Bulletin, 107, 238–246.

Bentler, P. M., & Bonett, D. G. (1980). Significance tests and goodness of fit in the analysis of covariance structures. Psychological Bulletin, 88, 588–606.

Betters-Reed, B. L., Nitkin, M. R., & Sampson, S. D. (2008). An assurance of learning success model: Toward closing the feedback loop. Organization Management Journal, 5, 224–240.

Chou, C. P., & Bentler, P. M. (1995). Estimates and tests in structural equation modeling. In R. Hoyle (Ed.), Structural equation modeling (pp. 37–59). Thousand Oaks, CA: Sage.

Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18, 39–50.

Gardiner, L. R., Corbitt, G., & Adams, S. J. (2010). Program assessment: Getting to a practical how-to model. Journal of Education for Business, 85, 139–144. doi:10.1080/08832320903258576

Garrett, N., Marques, J., & Dhiman, S. (2012). Assessment of business programs: A review of two models. Business Education & Accreditation, 4, 17–25.

Hair, J. F., Black, B., Babin, B., Anderson, R. E., & Tatham, R. L. (2005). Multivariate data analysis (6th ed.). Saddle River, NJ: Prentice-Hall.

Harper, J. S., & Harder, J. T. (2009). Assurance of learning in the MIS program. Decision Sciences Journal of Innovative Education, 7, 489–504. doi:10.1111/j.1540-4609.2009.00229.x

Hess, P. W., & Siciliano, J. (2007). A research-based approach to continuous improvement in business education. Organization Management Journal, 4, 135–147.

Hu, L., & Bentler, P. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6, 1–55.

Kline, R. B. (2005). Principles and practice of structural equation modeling (2nd ed.). New York, NY: Guilford.

LeClair, D. (2012). Broadening our view of assurance of learning. Retrieved from http://aacsbblogs.typepad.com/dataandresearch/2012/02/broadening-our-view-of-assurance-of-learning.html

Marsh, H. W., Balla, J. R., & McDonald, R. P. (1988). Goodness-of-fit indices in confirmatory factor analysis: Effects of sample size. Psychological Bulletin, 103, 391–411.

Martell, K. (2007). Assurance of learning (AoL) methods just have to be good enough. Journal of Education for Business, 82, 241–243.

Moskal, P., Ellis, T., & Keon, T. (2008). Assessment in higher education and the management of student-learning data. Academy of Management Learning & Education, 2, 269–278. doi:10.2307/40214542

Nunnally, J. C. (1978). Psychometric theory (2nd ed.). New York, NY: McGraw-Hill.

Pesta, B., & Scherer, R. (2011). The assurance of learning tool as predictor and criterion in business school admissions decisions: New use for an old standard? Journal of Education for Business, 86, 163–170. doi:10.1080/08832323.2010.492051

Romero, E. J. (2008). AACSB accreditation: Addressing faculty concerns. Academy of Management Learning & Education, 7, 245–255.

Sampson, S. D., & Betters-Reed, B. L. (2008). Assurance of learning and outcomes assessment: A case study of assessment of a marketing curriculum. Marketing Education Review, 18, 25–36.

Steiger, J. H. (1990). Structural model evaluation and modification: An interval estimation approach. Multivariate Behavioral Research, 25, 173–180.

Weber, P., Weber, J. E., Sleeper, B. J., & Schneider, K. C. (2004). Self-efficacy toward service, civic participation and the business student: Scale development and validation. Journal of Business Ethics, 49, 359–369.

Wilhelm, W. J., & Czyzewski, A. B. (2012). Ethical reasoning instruction in undergraduate cost accounting: A non-intrusive approach. Academy of Educational Leadership Journal, 16, 131–142.

Zocco, D. (2011). A recursive process model for AACSB assurance of learning. Academy of Educational Leadership Journal, 15, 67–91.
