In my look at online course rubrics, I want to include Central Michigan University’s Quality Assurance Checklist from their Center for Instructional Design (CID) because it is typical of what colleges that do have a rubric use. There is nothing wrong with keeping it simple. The CID staff members “research the latest pedagogical and technological information relating to online and hybrid classes in order to provide you with sound instructional technology support. We are a staff of instructional designers and training experts who provide assistance with the development of online and hybrid courses and course elements. We also offer training on the latest instructional technologies and online teaching and course development workshops.”
The rubric implies a basic, simple definition of online learning. Much of the rubric’s concern is with meeting institutional standards rather than addressing what does or does not work in an online course. For instance, the first item in the rubric asks if the course “adheres to the Master Course Syllabus.”
What does it assess?
The checklist covers the following five areas:
- Course Structure
- Content Organization & Usability
- Instructor Presence & Learning Community
with a sixth area for “Additional Comments.” There are 42 items in the rubric. I am not sure why issues like “appropriate technologies and methods are used to support course activities/assignments” are in the “Course Structure” area and not in a separate area discussing technology. This is what is unfair about my evaluation of online course rubrics: each item in a rubric has a history, either from the checklist the institution borrowed it from or from the institution itself. When you see an item in a teacher’s syllabus that says that poisonous plants are not allowed in the classroom, you know that therein lies a tale.
What are the weaknesses of the rubric?
Right off the bat, for a rubric focused on quality assurance, there is no discussion of accessibility. There is one item that says “transcriptions are provided on PowerPoint narrated lectures and on course intro audio/videos,” but there is no other attention to accessibility. How about alt tags for images? I understand that a rubric cannot solve all of those issues, but a rubric is a good place to start that conversation with faculty. And the rubric does not tell the whole story of the institution: there could be many opportunities elsewhere for faculty to learn how to implement accessible media in their courses. The CID instructional designers appear to be available to consult on accessibility.
What are the strengths of the rubric?
This is one form in a set of forms: if you look at the page, there is a pull-down menu that leads to a “Peer Review Checklist” and an “Online Course Revision” form. I am hoping that all of these forms together are part of a larger professional development plan that would include faculty workshops on defining terms in the rubric. I like the attention paid to things like item 15, which shows students how to get help. Communicating how to get help and what the instructor expects of students is essential in online classes because those expectations are different from those in face-to-face courses.
I appreciate the focus on instructor presence, building community, and the encouragement to have guest speakers. This checklist would be a good starting point to help a department or institution explore problems and issues in online learning. For a rubric this simple to work, there would have to be a lot of agreement (and experiential homogeneity) about what constitutes a quality online experience for students.