This is the first post in a series examining how different institutions evaluate online courses. Each of the seven postings that follow will feature a rubric from a different institution. I am specifically interested in what institutions have created, adapted, or purchased, and why. Each posting will address:
- What problems does this rubric solve?
- How is this solution tied into the campus community?
- What is unique to this campus culture that is addressed by this rubric?
One issue I have encountered as an instructor, instructional designer, and administrator is that the evaluation of online courses tends to be an afterthought – something done only after a course has been created and built. The specifications for online courses should be communicated to faculty before the design process begins. This assumes there is a “design process,” which is not always the case. For rubrics to work effectively in online course evaluation, they must be part of the course development process. This is less intuitive than it sounds, because departments within a college often have long-standing processes for getting online courses created that differ from department to department.
All of this raises the question: why use a rubric for evaluating courses at all? We don’t have rubrics for evaluating face-to-face courses, so why do we need them for online courses? First, instructors and students have a lifetime of experience in face-to-face classrooms, and there are accepted standards for what constitutes a face-to-face classroom and how it should work. We do not yet have that in online learning, despite the decades colleges have been offering online classes. With some notable exceptions, early online courses were concerned with the transfer of information, which is not the same as teaching and learning.
Many instructors and institutions were not clear on how learning occurs online. Since the 1990s, there has been a great deal of research on how people teach and learn online successfully. Interestingly enough, that research follows the same lines as the research on effective face-to-face teaching – I am thinking here of Chickering and Ehrmann’s online follow-up to Chickering and Gamson’s research on face-to-face undergraduate success. Rubrics for evaluating online courses can be a way of communicating the expectations and best practices of online teaching and learning to new teachers, or to teachers who have not had much experience online.
All of that said, these rubrics are not teacher evaluation tools. One can build a course to the specifications of a rubric and still have an unsuccessful course; a course’s implementation and success will always rest in the teacher’s hands.
And not all rubrics are created equal. Some are very basic; some try to evaluate things that are difficult to capture in a rubric – cultural inclusivity, for instance, may best be assessed through formal peer and student evaluations. Not all of the rubrics sufficiently stress the things we now know are critical to online course success, such as community, or the latest research around Connectivism and Open Pedagogy. Despite the research into successful online learning, there can still be great disparities within a university between its online and face-to-face courses. One cause is that, despite the widespread implementation of various solutions, each institution is unique – each is embedded in a community with different needs. The questions I hope to answer this summer are: how do we account for this uniqueness while applying a solution from outside the institution? What are we really doing when we adopt a course evaluation rubric? What are the concerns and opportunities when adopting one?
My goal at the end of these postings is to present a meta-rubric for evaluating course evaluation rubrics (something along the lines of Dr. Mollinix’s work at Monmouth University) and to develop an adoption plan for instructors, departments, or institutions – one that addresses the needs of the learning community, the professional development of instructors, and the goals of the department or institution.
The rubrics I am evaluating include:
- OEI: Course Design Rubric for the Online Education Initiative (California Community Colleges Chancellor’s Office)
- Quality Matters
- Quality Online Course Initiative Rubric and Checklist (Illinois Online Network)
- Online Course Evaluation Project (OCEP) (Monterey Institute for Technology and Education)
- Quality Assurance Checklist (Central Michigan University)
- Online Course Assessment Tool (OCAT) and Peer Assessment Process (Western Carolina University)
- Quality Online Learning and Teaching (California State University)
Let me know if you would like me to look at others, or tell me about your own experience with online course evaluation rubrics (the good, the bad, and the ugly). I would love to hear from you.