
Journal of Education for Business

ISSN: 0883-2323 (Print) 1940-3356 (Online) Journal homepage: http://www.tandfonline.com/loi/vjeb20

A Study of Teaching and Testing Strategies for a Required Statistics Course for Undergraduate Business Students

John A. Lawrence & Ram P. Singhania

To cite this article: John A. Lawrence & Ram P. Singhania (2004) A Study of Teaching and Testing Strategies for a Required Statistics Course for Undergraduate Business Students, Journal of Education for Business, 79:6, 333-338, DOI: 10.3200/JOEB.79.6.333-338

To link to this article: http://dx.doi.org/10.3200/JOEB.79.6.333-338

Published online: 07 Aug 2010.



A Study of Teaching and Testing Strategies for a Required Statistics Course for Undergraduate Business Students

JOHN A. LAWRENCE
RAM P. SINGHANIA
California State University, Fullerton, California

ABSTRACT. In this investigation of student performance in introductory business statistics classes, the authors performed two separate controlled studies to compare performance in (a) distance-learning versus traditionally delivered courses and (b) multiple-choice versus problem-solving tests. Results of the first study, based on the authors’ several semesters of experience teaching the course in both distance-learning and traditional formats, show that the distance-learning students did not fare as well as those taking the course in the traditional format. The results of the second study, in which a common set of students took both multiple-choice and written exams in the same semester, showed no significant difference in performance.

Colleges and universities continually must try to improve the design, teaching methodologies, and testing strategies of their courses. The higher education system is being challenged to provide increased educational opportunities without increased budgets. Advances in information technology fortunately have brought many innovative alternatives to both teaching and testing methods. Some of the most important factors have been the development, availability, and increasing popularity of the Internet. Advances in hardware technology, software programs, and presentation software are just some of the factors that have affected the delivery of university courses inside and outside of the classrooms. In addition, according to a Department of Education study published in July 1997 in USA Today, an increasing nontraditional university population finds itself as “bargain hunting, time-strapped shoppers who value convenience and flexibility over prestige” (“Tax breaks will make . . . ,” p. 14A). The capabilities and quality of online courses, course costs, and convenience have led to a situation in which teaching and testing approaches in the same course vary significantly, not only from instructor to instructor but also from section to section taught by the same instructor through varying delivery modes.

In this article, we report on the results of a comparison of alternative methods of teaching and testing in a required statistics course for undergraduate business students at California State University, Fullerton (CSUF). This course is similar to undergraduate courses required at almost all AACSB-accredited business schools. At CSUF each semester, more than 1,000 students are enrolled in more than 25 different sections taught by more than a dozen full- and part-time faculty members. Although all instructors are required to use the same text, they can differ in their choice of delivery mode, emphasis on Excel approaches, extent of computer laboratory time, applied relationship between theory and practice and between hand calculation and computer use, and type of testing. With such a variety of instructional approaches, the school administrators, course coordinator, and even the individual instructors can become concerned about maintaining quality and consistency in instruction.

Traditional Versus Distance-Learning Performance

Driven by the widespread availability and increasing popularity of the Internet, as well as by intense competition for students, distance learning has become a popular delivery mode for all types of university courses and programs. According to the National Center for Education Statistics, by 1998 more than 44% of all higher education institutions offered distance-learning courses, an increase of more than one third compared with just 3 years earlier (Finn, 1998). Lawrence (2003) found that there were more than 1.3 million total enrollments in over 50,000 distance-learning course offerings. In June 2000, the National Education Association reported that more than 90% of its members were at institutions offering or considering offering distance-learning courses and that more than 10% of its members already had taught at least one course online. Now, virtually every major American university offers these courses.


Distance-learning courses reach a broader student audience and have the potential to address students’ needs better at significantly lower costs. Further, as there is little evidence that any of the factors favoring the increased popularity of distance-learning courses will reverse course in the near future, it is relatively safe to conclude that the acceptance of these courses will only grow. Indeed, in June 2001 the National Governors Association, although citing the need to oversee the quality of such courses, nonetheless enthusiastically endorsed expanded distance-learning opportunities (Lawrence, 2003).

As distance learning gains an increasingly wider audience, many educators are concerned about how learning in distance-learning courses compares with that in traditional courses. Distance learning is still in its infancy, and there are numerous ways to deliver a distance-learning course (Phillips, 2001). Delivery platforms such as Blackboard and WebCT have narrowed the approaches to some extent, but there is still a plethora of distance-learning approaches on the landscape. At their annual meetings, professional educational organizations such as the Institute for Operations Research and the Management Sciences (INFORMS) now regularly schedule sessions devoted to the teaching of statistics and other quantitative subjects via distance learning.

Despite such growing research interest in distance education, “there is a relative paucity of true, original research dedicated to explaining or predicting phenomena related to distance learning” (Phipps & Merisotis, 1999). Thus, although some researchers have presented arguments against distance learning, others have concluded that distance learning compares favorably with classroom instruction. Some have even argued that distance education does not merit the granting of degrees, whereas others have indicated that students undergoing distance education have learning outcomes comparable to, if not better than, those of students in traditional classroom settings.

In our first study, we compared student performance in introductory statistics courses delivered both by distance learning and in a traditional teaching style. We decided to look at results from many different perspectives. We compared test scores from all distance-learning and traditional students taking the courses. Then we compared the average final grades of students in both courses. Students who do not complete the course either withdraw early enough and receive a “W” or drop out late in the semester (usually because it is unlikely that they will receive a passing grade) and receive a WU (which is equivalent to an “F”). In a third comparison, we compared the average course grades of students who finished the course with the corresponding grades of students who received a WU. Finally, we compared the percentage of students who received a WU with the percentage who dropped the course (W or WU) for any reason.

We evaluated the following five hypotheses:

H1: Students who take the traditional course will have higher average test scores than distance-learning students who take the equivalent course.

H2: The average course grade of students who finish a traditional course will be higher than that of students who finish the equivalent distance-learning course.

H3: The average course grade of students who finish or receive a WU in a traditional course will be higher than that of students who finish or receive a WU in the equivalent distance-learning course.

H4: The percentage of students taking a WU will be greater in the distance-learning course than in the equivalent traditional course.

H5: The percentage of students taking a W or a WU will be greater in the distance-learning course than in the equivalent traditional course.

Method

The traditional course is taught in a computer laboratory through a combination of instructor-generated PowerPoint slides and traditional whiteboard lecturing approaches. The course is heavily Excel based. In the traditional course, students are given a small amount of class time in the laboratory to master these Excel concepts.

There is little skimping on theory, but with the exception of a few very simple problems on exams that test whether students can perform hand calculations, the exams are computer based with large data sets. Emphasis is on problem formulation, manipulating data sets, reading and interpreting Excel-generated output, and drawing appropriate business conclusions. The course includes one Excel computer project, which typically has the student use Internet databases to track a stock over a specified period of time and use regression models to draw conclusions about the stock’s performance compared with other financial indicators.

The distance-learning course tackles the same concepts in a similar manner. The only difference is that instead of receiving face-to-face instruction, students listen to approximately 40 prerecorded PowerPoint modules narrated by the instructor to simulate face-to-face lecture material. Each module lasts between 10 and 40 minutes and is replete with dynamic motion and narration. Students also may download nonaudio print versions of the slides and take notes during the narration. Embedded in these slides are the theory, applications, hand calculations, and the Excel approach to statistics problem solving. The only required meetings are for the exams, although periodic chat sessions and optional in-class review sessions are scheduled before each exam.
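The stock-tracking project described above amounts to a simple linear regression of one return series on another. The following is a minimal sketch of that kind of analysis in Python rather than Excel; the weekly return figures and the 1.3 slope are invented for illustration and are not taken from the course project.

```python
# Sketch of a stock-vs-index regression of the kind the course project assigns.
# All data here are synthetic; a real project would pull prices from an
# Internet database and compute returns first.
import numpy as np

rng = np.random.default_rng(0)
index_returns = rng.normal(0.002, 0.02, size=52)                  # hypothetical weekly index returns
stock_returns = 1.3 * index_returns + rng.normal(0.0, 0.01, 52)   # stock tracks the index, plus noise

# Least-squares fit: slope plays the role of a beta-style comparison
# between the stock and the other financial indicator.
slope, intercept = np.polyfit(index_returns, stock_returns, 1)
r = np.corrcoef(index_returns, stock_returns)[0, 1]
print(f"slope = {slope:.2f}, intercept = {intercept:.4f}, R^2 = {r ** 2:.2f}")
```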

Because of the difficulty that some students had accessing the narrated PowerPoint lectures over the Internet, we provided distance-learning students with a CD with the same lectures as those on the Internet. Students taking the course in the traditional manner had exactly the same access to the Web site, had the same project and homework, and took the same exams. Although they also could listen to the lectures over the Internet, they were not provided with the CD.

From spring 2001 through spring 2003, a 7-semester period that includes one summer session and one intersession, we offered the course four times in both formats, twice in a distance-learning format only, and once in the traditional format only. The fact that we offered the course in both traditional and distance-learning formats in 4 different semesters allowed for paired t-test comparisons. We discuss our results from these paired comparisons and the overall performance using all 11 sections (the 6 semesters in which we taught the course in a distance-learning format and the 5 semesters in which we taught it in the traditional format) using two-sample t-tests with equal variances (the data passed the equal variances tests).
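As a concrete illustration of the two procedures just described, here is a minimal sketch that runs a paired t-test on per-semester averages and a pooled two-sample t-test across all sections. All of the scores are hypothetical (the underlying data are not reproduced in the article); the margin-of-error calculation mirrors how the differences in the Results section are reported.

```python
# Paired vs. two-sample t-tests, as used in the first study (hypothetical data).
import numpy as np
from scipy import stats

# Hypothetical average test scores for the 4 semesters offered in both formats
traditional = np.array([78.2, 75.9, 80.1, 77.4])
distance = np.array([71.5, 70.3, 72.8, 70.7])

# Paired t-test: same semester, two delivery formats
t_stat, p_paired = stats.ttest_rel(traditional, distance)

# 95% margin of error for the mean paired difference
diff = traditional - distance
n = len(diff)
t_crit = stats.t.ppf(0.975, df=n - 1)
margin = t_crit * diff.std(ddof=1) / np.sqrt(n)
print(f"paired: p = {p_paired:.4f}, mean diff = {diff.mean():.2f} +/- {margin:.2f}")

# Two-sample t-test with equal variances, pooling all 11 sections
all_trad = np.array([78.2, 75.9, 80.1, 77.4, 79.0])              # 5 traditional sections
all_dist = np.array([71.5, 70.3, 72.8, 70.7, 73.1, 69.8])        # 6 distance sections
t2, p_pooled = stats.ttest_ind(all_trad, all_dist, equal_var=True)
print(f"two-sample: p = {p_pooled:.4f}")
```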

Results

We obtained the following results in our first study:

H1: There was strong evidence that the traditional students outperformed the distance-learning students on tests (for the paired data, p = .0018; when we considered all data, p = .0036). The average difference in test scores using the paired data was 6.557 with a margin of error of 4.011. When we used all data, the average difference was 6.001 with a margin of error of 4.276.

H2: There was also strong evidence that the average grade of students completing the class given in the traditional format would be higher than the average grade of students completing the equivalent distance-learning course (for paired data for 4 semesters, p = .0097; when we included all 11 courses, p = .0130). When we used only the paired data, the average difference in the average grade (based on the usual 4-point scale) was .250 with a margin of error of .173; when we used all 11 courses, the average difference was .259 with a margin of error of .220.

H3: For those either completing the course or receiving a WU, there was moderate evidence based on paired semester data (p = .0785) that the average grade of those taking the traditional course was higher than the average grade of those taking the distance-learning course. There was significant evidence of this same result when we used all semester data (p = .0184). The average difference in the average grade (again based on the usual 4-point scale) was about .6 of a point with a margin of error of between .55 and 1.1.

H4: There was moderate evidence that the percentage of students who did not finish the class and who received a WU (rather than an F) was greater in the distance-learning courses than in the traditional ones (for paired data for 4 semesters, p = .0872; when we used all 11 courses, p = .0339). The average difference in these rates based on only the 4 paired semesters was 18.5%, but with a margin of error of ± 33.2%. When we used all 11 courses, the average difference between the two groups was 15.7% with a margin of error of 17.1%.

H5: There was moderate evidence based on the paired data that the percentage of students who did not finish the class and received a WU or a W (appropriately dropping the course before the university-sanctioned drop date) was greater in the distance-learning courses than in the traditional courses (p = .0551). The average difference based on only paired semesters was 24.4% with a margin of error of ± 34.6%. When we used all 11 courses, there was significant evidence of such a difference (p = .0240); the average difference between distance-learning and traditional courses was 19.5% with a margin of error of 19.3%.

Conclusions

According to the results of our first study, the distance-learning students did not fare as well as those taking the same course in a traditional format. We would hope to be able to conclude, however, that the gap in such performance differences narrowed over time. Trend timelines show that the difference in overall performance between the two groups narrowed over time, the average grade in distance-learning classes increased, and the percentage of students receiving Ws and/or WUs in distance-learning classes decreased. However, none of these trend estimates could be found to be statistically different from 0.

Possible Confounding Factors

Although this study ruled out instructor performance as a confounding factor, several other factors could have affected the outcome.

1. This distance-learning course differs from most other distance-learning statistics courses. It is not a short-answer, multiple-choice, Blackboard-style distance-learning course. Instead, lectures are recorded, and we use dynamic PowerPoint slides to simulate a classroom lecture format. There are various link aids, including sample tests, answered homework, Excel files, and so forth, but the student must spend time listening to the recorded lectures and navigating the Web site. In short, the distance-learning student must exhibit more discipline in this type of distance-learning course than he or she would in others.

2. When we first offered the distance-learning course, the recorded lectures were accessed solely from the Web site. The first distance-learning class was given in the spring 2001 semester. At that time, many students had access to only 28.8K modems. Even with 56K modems, many students experienced severe buffering, causing problems with their ability to hear the lectures from home effectively. Those with this problem had to come to campus to listen to the lectures (over T1 lines), negating much of the benefit of taking a distance-learning class. After the first semester in which the course was given, we distributed CDs with the recorded lectures to the distance-learning students. Students taking the course in the traditional mode had access to the distance-learning recorded lectures but not to the CD. In the brief period since spring 2001, many more students now have gained access to DSL and cable modems and even T1 lines at work.

3. Until the fall 2003 semester—that is, for the entire length of this study—we identified distance-learning Web courses in the class schedule only by a superscript symbol next to the course. Students then had to find the list of symbols to ascertain whether a particular course was a Web course. Consequently, about half of the students did not realize that they had signed up for a distance-learning course; this situation occurred even in the most recent offering of the distance-learning course. Furthermore, although there were perhaps 25 other sections of the course, demand for the course still exceeded the total number of seats in all sections. Hence many students took the distance-learning course even though they would have preferred to take a traditional course.

4. We administered the distance-learning course through the main campus at California State University, Fullerton. Although the course was nominally scheduled in the evenings (with the exception of the summer course), the distance-learning class had the same characteristics as the student population taking traditional courses at the main campus. With few exceptions, this population was composed of students in their early 20s. The traditional course in this study was given at California State University, Fullerton’s South County branch campus (located at Mission Viejo until the fall 2002 semester, at which time it relocated to El Toro). Students in these courses usually are older (their approximate average age is 26) and more affluent than the main-campus students, and one could assume that they value education more. They typically have full-time jobs, and more of them tend to have families compared with students at the main campus. Although we do not have evidence demonstrating a difference between these two sets of students, these are factors to consider.

5. The distance-learning course is evolving constantly. The instructor is still on a steep learning curve and is still discovering what does and does not work. More and better information is added to the Web site each semester.

6. In the last 2 years, many more students have been exposed to other Web courses. As students become more comfortable taking Web courses, their performance should improve.

Problem-Solving Versus Multiple-Choice Testing

In our second study, we compared student performance on traditional problem-solving tests with that on multiple-choice tests. A primary reason for making this comparison was an avalanche of student complaints about how hard the written tests were in this course. In fact, during the 5 most recent semesters that we taught the statistics course, the most negative complaint by far on our student evaluations concerned the perceived difficulty of the tests. A second reason for making this comparison was a prevailing notion that somehow students can guess their way through multiple-choice tests (leading some to refer to them as multiple-guess tests). Although such a comparison has been done in the past, in this study the same students (taught by the same instructor) took both kinds of test, which eliminated the bias of instructor differences.

In this study, we were interested not only in comparing the overall differences between student performance on problem-solving tests and that on multiple-choice exams, but also in investigating whether there was a learning curve of improvement. We evaluated the following three hypotheses:

H6: Students will perform better on average on the first problem-solving exam than on the first multiple-choice exam.

H7: Students will perform better on average on the second multiple-choice exam than on the second problem-solving exam.

H8: The overall average test grade received on problem-solving tests will differ from that received on multiple-choice exams.

Method

During spring 2003, the statistics course that we used as the basis of this study was taught in the classroom, with an emphasis on mastering the concepts in a traditional manner. We used Excel to solve several problems and a portion of class time to interpret Excel printouts (particularly for hypothesis tests and regression analyses), but most problems were solved by hand and/or calculator. We gave classes and tests in a traditional classroom setting. Using this teaching paradigm, we gave both traditional and multiple-choice exams to the same students, allowing for a testing/learning outcomes comparison of these two approaches to test taking.

We graded the course primarily on four examinations. The first examination, which covered basic descriptive statistics, probability theory, and discrete probability distributions, was a traditional problem-solving test. The second examination, which covered continuous probability distributions, sampling distributions, estimation, and hypothesis testing, was a multiple-choice test with each question having four choices, including the correct answer. The third examination, which covered regression analyses, multiple regression, analysis of variance, and chi-square tests for multinomial distributions and contingency tables, was again a problem-solving test, whereas the comprehensive final examination was again a multiple-choice exam.

Students could use the answer choices on multiple-choice exam questions as clues to solving the problem correctly. However, this approach would have been successful only if the student had a basic understanding of the concepts; the student could not simply guess the correct answers. Thus, many of the wrong answers were generated by students who did not know all the steps for solving the problem. All tests were similar in terms of complexity and the number of questions, and students were told in advance which type of test to expect.
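The claim that students could not simply guess their way through is easy to quantify: with four choices per question, blind guessing scores 25% on average, and the binomial tail probability of reaching a passing score is negligible. The 25-question exam length below is an assumption for illustration; the article does not state how many questions each exam had.

```python
# Probability of passing a four-option multiple-choice exam by blind guessing.
from scipy.stats import binom

n_questions = 25            # assumed exam length (not stated in the article)
p_guess = 0.25              # four choices per question, one correct
passing = 18                # scoring at least 18/25 (72%), a plausible passing bar

prob_pass = binom.sf(passing - 1, n_questions, p_guess)  # P(X >= 18)
print(f"P(pass by guessing) = {prob_pass:.2e}")          # roughly one in a million
```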

Exactly 100 students completed this statistics course with its alternating exam pattern. This format allows for pairwise t-test comparisons. Using pairwise t-tests, we compared the grades on the first problem-solving exam with those on the first multiple-choice exam, the grades on the second problem-solving exam with those on the second multiple-choice exam, and the overall average grade for the two problem-solving exams with the overall average grade for the two multiple-choice exams.
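A sketch of those three paired comparisons follows, using invented scores for five students rather than the actual 100 records (which are not reproduced in the article).

```python
# The three pairwise comparisons of the second study (hypothetical scores).
import numpy as np
from scipy.stats import ttest_rel

# Columns: exam 1 (problem-solving), exam 2 (multiple-choice),
#          exam 3 (problem-solving), exam 4 (multiple-choice) -- made-up data
scores = np.array([
    [82, 74, 78, 83],
    [90, 85, 88, 91],
    [71, 66, 69, 75],
    [65, 60, 70, 72],
    [88, 80, 84, 86],
], dtype=float)

ps1, mc1, ps2, mc2 = scores.T
print(ttest_rel(ps1, mc1).pvalue)                          # H6: first PS vs. first MC
print(ttest_rel(mc2, ps2).pvalue)                          # H7: second MC vs. second PS
print(ttest_rel((ps1 + ps2) / 2, (mc1 + mc2) / 2).pvalue)  # H8: overall PS vs. overall MC
```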

Results

We obtained the following results for the three hypotheses formulated for our second study:

H6: We found strong evidence (p = .0008) that the students, given no previous familiarity with the instructor’s testing, fared significantly better on the first problem-solving exam than on the first multiple-choice exam. The average difference in test scores between the first problem-solving and the first multiple-choice exam was 6.20 points with a margin of error of ± 3.78 points.

H7: We found strong evidence (p = .0096) to conclude that the students, given the experience of one problem-solving and one multiple-choice exam, now scored significantly better on the second multiple-choice exam than on the second problem-solving exam. The average difference in test scores between the second multiple-choice exam and the second problem-solving exam was 4.17 points with a margin of error of ± 2.34 points.

H8: However, when we compared the combined test results for both problem-solving tests with the combined test results for both multiple-choice tests, we could find no significant difference in student performance (p = .3990). Based on the paired data, the average difference in test scores between problem-solving and multiple-choice exams was only 1.01 points, and the margin of error was ± 2.38 points.

Possible Confounding Factors

Although the experimental design of this study removed the instructor and the students as possible confounding factors, the following factors could have affected our results:

1. Although we took care to assign the same number of problems with similar complexity on all tests, there actually could have been some differences in the complexity of the tests. Student perception about the complexity, rather than actual complexity, also could have been a confounding factor.

2. The subject matter in this course lends itself to varying degrees of difficulty. Some material on the first test actually may have been learned in previous college probability courses or even in high school. As the course progresses, there is less likelihood of the students having been exposed previously to the material.

3. As the course progresses, students become more familiar with the format and wording of the tests. This learning curve may have allowed the students to be more at ease when taking tests. On the other hand, students who performed poorly on early exams may have experienced increased tension instead and may have become “psyched out” when taking subsequent tests.

These are all factors worth exploring in future studies.

Student Perceptions Based on Student Evaluations

Even though most academics have raised some concerns about using student evaluations to measure the performance and quality of instruction, research studies have found that such evaluations can be reliable and valid. In an exhaustive study, Cohen (1981) concluded that the overall correlation between instructor ratings and student achievement was .43 and that the overall correlation between course ratings and student achievement was .47. However, many studies also indicate that student evaluations of faculty members should not be compared across disciplines and course levels. Instructor evaluations also can be influenced by the faculty member’s gender (Bachen, McLoughlin, & Garcia, 1999; Basow, 1995). Thus, we feel that an investigation of quality of instruction should include a comparison of student evaluations but that researchers should try to make sure that any bias is removed.

Overall instructor ratings in these studies were based on responses to a seven-question evaluation conducted at the end of the semester in which this study was conducted and at the end of the previous semester. Students used a 4-point scale to rate the instructor’s ability to communicate, preparation for the class, and willingness to help, along with the exams’ coverage of the subject matter, the class and project assignments, and overall instructor effectiveness.

In the first study, both the traditional and the distance-learning statistics classes were delivered by the same instructor. This instructor consistently received extremely high student evaluations, much higher than department norms. In fact, his average student evaluation grade for the six distance-learning classes in this study was 3.42 (out of a possible 4), and his average student evaluation grade in the five traditional classes was 3.45. The students viewed the instructor equally in both formats; thus, instructor performance should not be viewed as a confounding factor. In fact, the p value for the paired comparison test for unequal average differences in student evaluations was .71, and the p value for the paired comparison test for differences in student evaluations for semesters in which the course was taught in both formats was .30. These values support the conjecture that instructor performance was not a confounding factor. In general, the correlation between student grade point average and the instructor’s student evaluation average was only .42.
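For illustration, the .42 correlation reported above could be computed along the following lines; the per-section GPA and evaluation averages below are invented, not the study’s data.

```python
# Pearson correlation between per-section average GPA and the instructor's
# evaluation average (hypothetical values for 11 sections).
import numpy as np
from scipy.stats import pearsonr

gpa = np.array([2.6, 2.8, 2.5, 3.0, 2.7, 2.9, 2.4, 2.8, 3.1, 2.6, 2.7])    # invented
evals = np.array([3.4, 3.5, 3.3, 3.6, 3.4, 3.5, 3.2, 3.5, 3.6, 3.4, 3.4])  # invented

r, p = pearsonr(gpa, evals)
print(f"r = {r:.2f} (the article reports r = .42 on the actual data)")
```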

In the second study, we compared the results from the student evaluations for the given semester with those from the student evaluations for the same course given by the same instructor in the previous semester. The courses were taught in exactly the same manner in both semesters, covered exactly the same material, were given at approximately the same times and days of the week, and were delivered in the same set of classrooms to the same types of students. The only significant differing factor was that in the latter semester the instructor gave two multiple-choice and two problem-solving exams, as opposed to only problem-solving exams, which he gave in the immediately preceding semester. The difference in the results was dramatic. A two-sample t-test for the hypothesis that there was an increase in the overall student evaluation scores yielded strong evidence (p = 1.63 × 10^-18) that students preferred the instruction when multiple-choice tests were substituted for problem-solving exams. This result might lead one to infer that students feel that multiple-choice tests are easier to take and thus might allow them to be more relaxed about the class in general. Because the difficulty of written tests seems to be an overwhelming concern for students, one might conclude that offering multiple-choice tests may lead to a more positive learning experience (and higher evaluations for the instruction) in the classroom.

Overall Conclusions

We can draw several conclusions based on these two studies. The first is that distance-learning students do not fare as well as those taking the same course in the traditional format. Although the trend timeline appeared to show that the gap representing the difference in overall course performance between traditional and distance-learning students narrowed over time, with the average grade in distance-learning classes increasing and the percentage of students receiving Ws and/or WUs in distance-learning classes decreasing, none of these trend estimates could be found to be statistically different from 0. However, according to student evaluations, students’ perceptions of distance-learning classes were as favorable as their perceptions of traditional classes.

The results of these studies also show that there were variations in the test scores and the average student performance from test to test according to the type of exam. There may be many theories about the variations in the two types of test (involving the complexity of the tests, students’ familiarity with the format and language of the tests, their feeling tense or being “psyched out,” etc.), but the data for the combined samples showed that the test scores were not significantly different between the written and multiple-choice tests. However, according to the analysis of student evaluations, students seem to prefer a class in which at least some of the exams are multiple choice. One perception that both students and faculty members might share is that multiple-choice tests are somehow easier. This can be viewed positively, as students may feel more relaxed when taking multiple-choice tests and have more positive learning experiences in the classroom.

REFERENCES

Bachen, C. M., McLoughlin, M. M., & Garcia, S. (1999). Assessing the role of gender in college students’ evaluation of faculty. Communication Education, 48(3), 193–210.

Basow, S. A. (1995). Student evaluations of college professors: When gender matters. Journal of Educational Psychology, 87(4), 656–665.

Cohen, P. A. (1981). Student ratings of instruction and student achievement: A meta-analysis of multisection validity studies. Review of Educational Research, 51(3), 281–309.

Finn, C. E., Jr. (1998). Today’s academic market requires a new taxonomy of colleges. Chronicle of Higher Education, XLV(1).

Lawrence, J. A. (2003). A distance learning approach to teaching management science and statistics. International Transactions in Operational Research, 10, 1–13.

National Education Association. (2000). A survey of traditional and distance learning in higher education. Washington, DC: National Education Association.

Phillips, V. (2001). The Virtual University Gazette’s FAQ on distance learning, accreditation, and other college degrees. Retrieved November 9, 2001, from http://www.geteducated.com/articles/dlfaq

Phipps, R., & Merisotis, J. (1999). What is the difference? A review of contemporary research on the effectiveness of distance learning in higher education. Washington, DC: Institute for Higher Education Policy.

Tax breaks will make higher education more accessible. (1997, July 14). USA Today, p. 14A.
