Course Rubrics: OCEP

Some of the rubrics I am examining are a bit of a time warp. The Online Course Evaluation Project (OCEP) from the Monterey Institute for Technology and Education, for instance, is from 2005. I think it is important to look back at these documents to see how our understanding of online teaching and learning has evolved, and to help us decide what is important to us as educators in e-learning. Our definitions of what makes a successful online experience have changed over the years (hopefully!) based on experience and research, but by looking at these rubrics, we have an opportunity to make decisions about what we would like to see happen in online education for the next decade. Each rubric allows us to ask what problems the institution was trying to solve. As I think about a meta-rubric for course evaluation rubrics, I am interested in what the original purpose of each rubric was. For instance, the OCEP was meant to evaluate “existing online courses in higher education.” This is a different project than building a template for online courses or educating faculty on how online courses succeed.

What problems does this rubric solve?
According to the rubric, the goal of OCEP is “to provide the academic community with a criteria-based evaluation tool to assess and compare the quality of online courses.”

It goes on to say that “the focus of the evaluation is on the presentation of the content and the pedagogical aspects of online courses, yet OCEP also considers the instructional and communication methods so vital in a successful online learning experience.” Existing online courses are identified and measured against a set of objective evaluation categories.

How was it created?
They also discuss the research that went into the rubric, although I would hope that the research itself was shared at some point: “these criteria were developed through extensive research and review of instructional design guidelines from nationally recognized course developers, best practices from leading online consultants, and from numerous academic course evaluations.”

How does it work?
I like this part of the rubric. It acknowledges the fact that courses are created by a community and not just a single instructor or subject matter expert. “OCEP employs a team approach in evaluating courses. Subject matter experts evaluate the scholarship, scope, and instructional design aspects of courses, while online multimedia professionals are used to consider the course production values. Technical consultants are employed to establish and evaluate the interoperability and sustainability of the courses.” In other words, it acknowledges the local community and culture that is creating the courses.

“The ongoing results of the OCEP study are available publicly in a web-enabled comparison tool developed in partnership with the EduTools project from WCET. OCEP is a project of the Monterey Institute for Technology and Education (MITE), an educational non-profit organization committed to helping meet society’s need for access to effective, high quality educational opportunities in an era of rapid economic, social, and personal change. The Monterey Institute for Technology and Education was founded as a 501(c)(3) non-profit.”

What does it assess?
I recognize the pattern of assessment here as one similar to those that would come out of courseware development at a textbook publisher. The assessment seems to follow the production model of their course development. The assessed categories are:

  1. Course Developer and Distribution Models
    This is exactly how publishers think of materials – first, who is it for, how will it be delivered, and who owns it: “This section notes the type and status of the course developer, the major methods for distribution of the courses to the organization’s constituents, and any licensing models employed by the developer.”
  2. Scope and Scholarship
    This section of the rubric is handled by the subject matter expert – usually the instructor: “This section focuses on the intended audience for the course, as well as the breadth and depth of the content presentation.”
  3. User Interface
    The course is then handed off to an instructional designer whose review includes the evaluation categories “that address the instructional design principles used in the access, navigation and display of the course that allow the user to interact with the content.”
  4. Course Features and Media Values
    Although it is not clear exactly how these “values” are measured, this section “addresses the types of media used to convey the course content and to demonstrate how the user interacts with the content presentation.” So far, this is a huge project for an evaluator, but the rubric goes on to look “at the effectiveness and relevance of the content presentation, the level of engagement for the user, and the instructional design value of the multimedia content.”
  5. Assessments and Support Materials
    Much like the materials provided by a textbook-centric course, this section “addresses the availability and types of assessments (complete tests or other activities used to assess student comprehension) and support materials that accompany the course as a resource for the instructor and the student.”
  6. Communication Tools and Interaction
    This section attempts to address “the course management environment, how communications take place between instructors, students and their peers, and what course content exists to effectively utilize the communication tools provided by the CMS.” By CMS the authors are referring to the learning management system, and they make the assumption that “since most course management environments include the functionality noted in the evaluation categories, the emphasis for this review is placed on the course content designed to drive the use of the communication tools (threaded discussion, chat, group or bulletin board activities, etc.).”
  7. Technology Requirements and Interoperability
    This section “addresses the technology and distribution issues related to the course, as well as the system and software requirements, operating systems, servers, browsers and applications or plug-ins.” The section also includes an evaluation of accessibility, a copyright review, and the “interoperability” standards (e.g., SCORM) applied to the course and course content.
  8. Developer Comments
    This is the one section where outcomes and student support are explicitly addressed. This section gives the course developer “an opportunity to highlight unique features of the course, provide a summary of course outcomes per available information, and clarify other course resources.”

What are the weaknesses?
It seems to be more focused on a production model than a course evaluation tool. I am not sure how it would be used to evaluate “already existing courses.” The model seems to be a one-way dump of information to the students, with some acknowledgement of the importance of interactivity. There is not enough information in the rubric itself to say how course reviewers would go about performing the review. Here the annotations of a rubric like Quality Matters are very helpful.

What are its strengths?
This rubric recognizes that it takes a team to create online learning. That is an important part of developing customized rubrics for an institution – it is an opportunity to bring together all the relevant resources of a campus or institution for a successful online experience. If a program, department, or institution needed to develop online instruction and needed a production model, this would be a good start, but only with the caveat that building one of these courses is not the same thing as instruction. Additionally, the rubric acknowledges the importance of interactivity as a success factor in online teaching and learning.

Let’s be clear – the OCEP is 12 years old, and there have been a number of advances in our understanding of online teaching and learning since then, but understanding where we have come from is an important part of figuring out where we are going. It is always useful to look at how other institutions have evaluated online classes.
