Assessing Distance Education Courses and Discipline Differences in their Effectiveness – 1


Paula Szulc Dominguez and Dennis R. Ridley

This research illustrated a new, parsimonious model that investigators interested in distance education can use to ask meaningful questions about the relative quality of distance education courses (Dominguez & Ridley, 1999). The approach removed the emphasis from student-level data and placed it upon course-based data. Sample data comparing online and traditional higher education courses across nine disciplines were reported. These data revealed that preparation for advanced courses was statistically equivalent whether the course prerequisites were online courses or their traditional classroom counterparts. The article further explored the usefulness of this framework for identifying a significant discipline-related difference in the relative effectiveness of online and traditional prerequisites as preparation for advanced courses.

In this article, we have further explored an alternative framework for assessing distance education courses (Dominguez & Ridley, 1999). The article reviews the rationale for our new approach and presents a new analysis with updated data to demonstrate an application beyond our earlier presentation. The new application explores an apparent departmental difference found in the new analysis.


Student performance in the here and now of a distance education course is typically at the center of assessments of its effectiveness. That is, investigators usually attend to how well students score on tests, exams, and assignments within the context of the distance education course itself. If comparisons with student performance in traditional classroom settings are to be made, they generally involve courses taken contemporaneously with the distance education course. In this way, assessments of distance education programs have amassed results that resemble a series of snapshots looking at the program and student performance on a semester-by-semester basis.

Using the student-level data within a particular time frame, institutions, distance education programs, and individual faculty have created a detailed portrait of distance education students and have established the comparability of student learning between distance education and traditional settings. This is a good beginning. Now that institutions have overcome the initial hurdles of establishing the first-generation distance education programs, the need arises for more elaborate, action-oriented information.

Focusing on student-level data tells only a limited tale. For example, generating a profile of the “successful” distance education student does not provide institutions with practical information for program improvement or refinement. What can an institution do with this piece of data? It is anathema in higher education to deny students entry into a course based on their demographic profile. Indeed, pushing the envelope of students’ abilities is at the core of instruction. Neither does the information really help individual faculty members interested in improving distance education students’ performance. Faculty members simply do not have the power to age a student five years, produce several offspring for them, or boost their G.P.A. half a point. What else can be done to provide institutions and faculty members with action-oriented information about the quality of instruction found in distance education programs compared with instruction in on-campus classes?

We propose a two-pronged shift in distance education investigations. The first shift removes the emphasis on distance education students and places it on the course itself. The second expands the scope of investigations to include distance education students’ subsequent performance in other classes. Using these parameters, such an assessment would question how well distance education courses prepare students for further study. Moreover, such an approach would allow institutions to compare student preparation in distance education settings versus their preparation in traditional education settings. This is very useful information for institutions that are expanding their distance education offerings. This assessment model has been used successfully to consider transfer between community colleges and four-year institutions (Quanty, Dixon, & Ridley, 1998).


The data for this study were obtained from Christopher Newport University (CNU), a state-supported institution in Virginia that has offered online courses since the early 1990s. It is important to note that, unlike the situation at some other institutions, there really is not a distinct “online student” group at CNU. Instead, online students at CNU are subsumed in the larger, traditional student population. Most of the online students typically take a combination of online and traditional courses.

Every online course at CNU has an on-campus counterpart, which offers the opportunity for comparison. Many online courses are 100- and 200-level courses that act as prerequisites for more advanced courses, some of which are also offered online and others through traditional means. For the purpose of this study, “traditional” forms of prerequisites included on-campus courses, courses transferred in from another college or a community college, or credit given for performance on an examination. Using this information, we can describe four pathways to enrollment in advanced courses that are available to CNU students.

Students enrolled in traditionally offered advanced courses may have taken the prerequisite course either through traditional means or through the online program. Alternatively (but, in practice, much less frequently), students can enroll in an online version of an advanced course after having taken the prerequisite in either an online or a traditional form. In our research, we focused our attention on the former scenario, and considered how well students perform in a traditionally offered advanced course based on the kind of prerequisite involved. A single question guided our study: Do online courses prepare students for advanced study as well as traditionally accepted forms of prerequisites?

To answer our question, we began by examining the enrollment records from six departments at CNU that offered a majority of the lower level online courses. We reviewed all online courses offered between fall 1994 and fall 1999 that served as prerequisites for other courses. We traced all the online students’ course of study after their participation in the lower level online course to see whether they went on to enroll in a traditional advanced course. Using this approach, in our current update we located a total of 50 enrollments for which the online course acted as a prerequisite for a traditionally offered advanced course.

We defined student success in the advanced course as obtaining a final grade of C or higher. To determine whether the online courses had prepared students as well as the traditional prerequisite courses, we compared the final grades of the 50 enrollments with the grades of their classmates in the advanced courses.

After the initial study, we entertained the hypothesis that one reason for a finding of no overall difference between course delivery formats might be that the effectiveness of online instruction varies with the department or discipline being studied. In effect, we proposed that there might be a statistical interaction between the discipline and the relative effectiveness of online and traditional forms of course delivery. To our knowledge, although such a claim is not unknown, it has not been tested as a hypothesis within a course-based approach.

The method we used to test the latter hypothesis was to first inspect the unsuccessful online enrollments and note the departments that offered those courses. Finding that there was only one such department (Management), we decided to examine the overall results broken down by discipline. We proposed to examine whether there was a significant influence that could be attributed to disciplines, and perhaps largely to one discipline. Such a finding might suggest that the relative effectiveness of online versus traditional courses, as prerequisites, differed in that discipline compared to others.


The results are presented as frequencies of enrollments in each of the cells within the first scenario described above (Table 1).(2) We present the numbers of enrollments in each of two levels of success in the target course (A-C or “successful” and D & F or “unsuccessful”).

Table 1

Final grades in advanced course by format of prerequisite course

Advanced Course Grades

Prerequisite Format    A-C    D & F    Percent Successful

Traditional            729     99           88.0

Online                  47      3           94.0

Using Fisher’s Exact Test of significance, we found that a probability of .09 was associated with this distribution. While there was no statistically significant difference in the students’ final grades, the question of a statistical interaction still must be tested. That is, the findings so far speak only to the statistical main effect of the instructional delivery method of the prerequisite course. Still remaining to be tested is the statistical interaction between method and the discipline or major of the target course.
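The test just described can be sketched with a short standard-library calculation. This is our illustrative code, not the authors’; the exact p-value depends on the tail convention used (one-sided, two-sided, or mid-p), so this one-sided sketch need not reproduce the .09 figure reported above.

```python
from math import comb

def fisher_exact_one_sided(a, b, c, d):
    """One-sided Fisher's exact test for the 2x2 table
    [[a, b], [c, d]], where rows are (pass, fail) counts per group.
    Returns P(second group has d or fewer failures) under the
    hypergeometric null with all table margins held fixed."""
    N = a + b + c + d          # total enrollments
    K = b + d                  # total failures across both groups
    n = c + d                  # size of the second group
    def pmf(k):                # hypergeometric probability of k failures
        return comb(K, k) * comb(N - K, n - k) / comb(N, n)
    return sum(pmf(k) for k in range(0, d + 1))

# Table 1: traditional (729 pass, 99 fail) vs. online (47 pass, 3 fail)
p = fisher_exact_one_sided(729, 99, 47, 3)
```

The hypergeometric form follows from conditioning on both margins of the table: given 102 failures among 878 enrollments, the test asks how likely it is that 50 online enrollments would contain so few of them by chance.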

It is instructive to examine the relative advantage in success rates for online versus traditional prerequisites. Such an examination can show the difference between Management and the aggregated non-Management course enrollments. To demonstrate, if we calculate the relative advantage as the percent of online successes minus the percent of traditional successes, we obtain quite different results for the two categories above. So measured, the relative advantage for non-Management course enrollments is positive: 100.0 - 86.6 = 13.4. The relative advantage for Management enrollments is negative: 62.5 - 92.3 = -29.8. The difference between these two measures is 43.2 out of a theoretically possible range of 200 (+100 to -100).
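The arithmetic above can be reproduced directly from the Table 2 counts. The variable names in this short sketch are ours, not the authors’:

```python
# (pass, fail) counts, aggregated from Table 2
mgmt_online, mgmt_trad = (5, 3), (192, 16)
non_online = (47 - 5, 3 - 3)       # all online minus Management: (42, 0)
non_trad = (729 - 192, 99 - 16)    # all traditional minus Management: (537, 83)

def success_rate(passed, failed):
    """Percent of enrollments earning a final grade of C or higher."""
    return 100.0 * passed / (passed + failed)

# relative advantage = online success rate minus traditional success rate
adv_non = success_rate(*non_online) - success_rate(*non_trad)     # 100.0 - 86.6 = 13.4
adv_mgmt = success_rate(*mgmt_online) - success_rate(*mgmt_trad)  # 62.5 - 92.3 = -29.8
spread = adv_non - adv_mgmt                                       # ≈ 43.2
```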

However, the above inspection of success rates is only suggestive since it does not include a statistical test. In addition, it combines all the disciplines that fall outside of Management into one “non-Management” category, which leaves open the question of differences, if any, among those disciplines in success rates relative to online versus traditional prerequisites. To test for an interaction between target discipline and prerequisite format (online vs. traditional), we examined the distributions of all disciplines, including Management as only one of nine disciplines (Table 2).

Table 2

Tabular results from Table 1 divided into Management and other disciplines of advanced courses

                     Online   Online   Offline   Offline
Discipline           Pass     Fail     Pass      Fail

Computer Science        3        0        17         0
Economics               5        0        37         4
Education               2        0        23         1
English                 5        0        60        13
Government              7        0        67         6
Management              5        3       192        16
Philosophy              3        0         9         1
Psychology              7        0       163        27
Spanish                10        0       161        31

Total                  47        3       729        99

We performed two chi-squared analyses of the distributions of online passes and failures across all disciplines. These distributions are in the first two columns of Table 2. The first analysis was a straightforward calculation that determined expected frequencies of passes and failures internally, that is, from the online passes and failures only. The result was a χ² of 16.76 (df = 8, p < .05). However, this method assumed equal grading standards across all disciplines, an assumption that appears questionable. Therefore, a second method was used that set the expected frequencies proportionately with the rates of passes and failures in each discipline found in offline courses. This external method found a χ² of 15.77 (df = 8, p < .05). The contribution of Management to the χ² in the two analyses was 79% and 58%, respectively.
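Both analyses can be reconstructed from the Table 2 counts with the standard-library sketch below (our code, not the authors’). The internal method derives expected pass/fail rates from the pooled online column; the external method takes each discipline’s own offline rates. One assumption we make: cells with zero expected frequency (the offline Computer Science failure rate is zero) are skipped rather than divided by.

```python
# (online (pass, fail), offline (pass, fail)) counts from Table 2
data = {
    "Computer Science": ((3, 0), (17, 0)),
    "Economics":        ((5, 0), (37, 4)),
    "Education":        ((2, 0), (23, 1)),
    "English":          ((5, 0), (60, 13)),
    "Government":       ((7, 0), (67, 6)),
    "Management":       ((5, 3), (192, 16)),
    "Philosophy":       ((3, 0), (9, 1)),
    "Psychology":       ((7, 0), (163, 27)),
    "Spanish":          ((10, 0), (161, 31)),
}

def chi_square(cells):
    """Sum of (O - E)^2 / E over (observed, expected) pairs, skipping E = 0."""
    return sum((o - e) ** 2 / e for o, e in cells if e > 0)

# Internal method: expected rates come from the pooled online column (3/50 failures).
fail_rate = sum(on[1] for on, _ in data.values()) / sum(sum(on) for on, _ in data.values())
cells = []
for on, _ in data.values():
    n = sum(on)
    cells += [(on[0], n * (1 - fail_rate)), (on[1], n * fail_rate)]
chi_internal = chi_square(cells)   # ≈ 16.76

# External method: expected rates come from each discipline's offline courses.
cells = []
for on, off in data.values():
    n, off_fail = sum(on), off[1] / sum(off)
    cells += [(on[0], n * (1 - off_fail)), (on[1], n * off_fail)]
chi_external = chi_square(cells)   # ≈ 15.77
```

With df = 8, the .05 critical value of χ² is about 15.51, so both statistics clear the significance threshold reported above.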


Based on the finding that there was no significant difference in students’ final grades, we conclude that the online courses in the sample prepared students for advanced study at least as well as the traditionally accepted forms of prerequisites. However, a finding of an exact p-value of .09, favoring online enrollments, encouraged the further examination of a possible interaction between course type (Management and non-Management) and the relative success of online versus traditional prerequisites. This approach proved fruitful. Although the number of Management enrollments in online courses was small, the results suggested that the relative advantage of online is negative for Management enrollments and positive for non-Management courses. The two tests of significance vis-à-vis course and prerequisite types also supported the conclusion that discipline makes a difference in the effectiveness of online versus traditional course prerequisites. Most of that difference is due to Management.

However, a significant difference that divides along departmental lines needs to be explained. One possible approach is to conclude that online instruction simply is not as effective in the discipline represented by the department. There is a popular notion that the content and expectations of some courses do not align well with the format of text-based communication. Another approach is to suggest that a kind of instructor expectation or Pygmalion effect has occurred in the form of a subtle bias in the grading of students whose course backgrounds included work done online. Such a bias might be expected to surface in the gray areas represented by marginal students.

The two explanations just offered may be complementary, since a format that is less favorable for a particular course or course type might also influence the expectations of instructors. For example, if an online course provided less effective preparation for a target course in the past, teachers might come to expect that result; their expectations might magnify the difference that already existed before the expectation took hold. Other explanatory possibilities might exist as well. This article provides a first step in suggesting that discipline-related differences may really exist. Greater understanding of the phenomenon awaits future study.

What is immediately apparent from the approach we employed is the practical information it provides. If a difference had been found in student performance that related to the nature of the prerequisite, institutions would be able to take a number of steps. For example, if it were shown that students taking online courses as a prerequisite were at a disadvantage, then departments and faculty members would be able to reconsider the format of online instruction, the course content, or the instructor’s approach. Departments and faculty members would be able to apply similar scrutiny if it turned out that students with traditionally accepted forms of prerequisites were the ones at a disadvantage. Similarly, in this article we have suggested an added value when research suggests that target course discipline differences exist in the relative effectiveness of two prerequisite course formats. Even when no statistically significant differences are detected, institutions can still make decisions based on the information. For example, since it appears that online students are generally not at a disadvantage when it comes to more advanced study, departments that have been hesitant about developing online courses may choose to begin offering them.

In sum, we have described a new, parsimonious model that investigators interested in distance education can use to ask meaningful questions about the relative quality of distance education courses. The approach we suggest removes the emphasis from student-level data and places it squarely on the course itself. We have demonstrated some of the value that such an approach can have and we have suggested other potential uses. Further, we are optimistic that if institutions pool their data, in the future a more complete understanding of distance education can emerge.


(1.) The authors gratefully acknowledge Dr. Richard M. Summerville, Professor of Mathematics at Christopher Newport University, for his statistical recommendations.

(2.) The tabulated frequencies represent those reported in Dominguez & Ridley (1999), as corrected in the online version of the journal, plus new data augmenting the initial study.


Dominguez, P. S., & Ridley, D. R. (1999). Reassessing the assessment of distance education: A proposal for an alternative framework in higher education. T.H.E. Journal, 27(2), 70-76.

Quanty, M. B., Dixon, R. W., & Ridley, D. R. (1998). Community college strategies: A new paradigm for evaluating transfer success. Assessment Update, 10(2).

Dr. Paula Szulc Dominguez, Director of Research and Evaluation, Hezel Associates. Dr. Dennis R. Ridley, Director of Institutional Research and Planning, Virginia Wesleyan College.

Correspondence concerning this article should be addressed to Dr. Dennis R. Ridley, Director of Institutional Research and Planning, Virginia Wesleyan College, Norfolk/Virginia Beach, VA 23452. E-mail:

COPYRIGHT 2001 George Uhlig Publisher
