I have not used this rubric to evaluate a course myself, but I have read about it, and no research into rubrics for course development would be complete without examining it. The Illinois Online Network and the Illinois Virtual Campus developed their own quality online course rubric and evaluation system. Their goal “is to help colleges and universities to improve accountability of their online courses.”
What problems does this rubric solve?
According to their website, the main objectives of this project were to:
- create a useful evaluation tool (rubric) that can help faculty develop quality online courses
- identify “best practices” in online courses
- recognize faculty, programs, and institutions that are creating quality online courses
How was it created?
Again, according to the site: “The first step of our process was to brainstorm criteria for what makes a quality online course. As we came up with our ideas, we noticed several patterns and overlapping information. Therefore, we decided to chunk the information in to six categories.” I appreciate that this was an in-house project, but I would like to know more about any research that went into it beyond brainstorming. I am sure that faculty with experience and expertise contributed, but knowing how they identified “best practices” is important. There are many things teachers once did as “best practices” that we simply don’t do anymore (e.g., physical pain as a learning motivator).
What does the rubric assess?
After the brainstorming process, each category was broken down into topics and finally into individual criteria, organized under these six categories:
- Instructional Design
The instructional design section is fairly basic. It defines learning as a “transfer of knowledge and skills” which is a bit dated.
- Communication, Interaction, and Collaboration
I am glad there is communication, interaction, and collaboration in this rubric. They could go a little further and think about community building beyond group work.
- Student Evaluation & Assessment
As expected, assessments are aligned with learning objectives. There are items in this category that belong under “syllabus boilerplate,” like a statement of FERPA rules.
- Learner Support and Resources
This is a critical area as those in the commercial MOOC world found out. Supporting students and building that support into the curriculum is crucial to student success in online learning. The rubric assesses a bare minimum of support.
- Web Design
With its concerns about “scrolling,” “pop-up windows,” and “frames,” the rubric’s gray 1998 roots are showing. This category needs a refresh informed by Universal Design for Learning.
- Course Evaluation
“Opportunities for learner feedback throughout the course on issues surrounding the course’s physical structure (e.g. spelling mistakes, navigation, dead links, etc.) are provided.” The same kind of criterion is repeated for Instruction and Content.
Weaknesses of the Rubric
The weakness that stands out most is the lack of any discussion of accessibility and universal design. I would like to see this rubric refreshed and backed by research. There is a page on their site with links marked “research” and “resources,” but it seems to be an unimplemented plan. The pedagogy around the rubric needs updating as well.
Strengths of the Rubric
There are some things to like here, though. First, it is used for course development, not merely as a “fix-it” tool. It is also used for recognizing exemplary courses and faculty. This is one of the most important uses of rubrics on a college campus: the discussion of rubrics, an adoption plan, and implementation can bring faculty together to develop quality online programs from within the culture of the campus. The rubric becomes another hub for connecting faculty with one another. Finally, it is openly licensed, which means other campuses can benefit from this work as they customize a process for the needs of their own teaching and learning communities.