Course Rubrics: CSU’s QOLT

I have used the California State University’s Quality Online Learning and Teaching (QOLT) rubric, as well as its ancestor, the CSU Chico rubric, to evaluate courses. This is the two-page version of the QOLT rubric. I have seen this rubric used very effectively both for course design and for the review of existing courses. It is not, however, without its issues. It is well worth exploring the CSU’s website to see how quality assurance for online courses is being implemented on a state-wide level.

What Problems Does This Solve?
According to the CSU website: “The Quality Learning and Teaching (QLT) program was developed to assist faculty, faculty developers, and instructional designers to more effectively design and deliver online, blended, and flipped courses. The QLT evaluation instrument, containing 9 sections with 53 objectives, provides guidance and feedback to instructors. In addition, QLT includes an optional section on Mobile Platform Readiness (4 objectives). Each of the sections has a built-in rubric that provides feedback based on the instructor’s formative score.”

This is a good use of these rubrics – faculty new to online learning can use them to get an idea of how online classes work, and faculty new to instructional design can get a sense of the elements required for a successful online course.

How does it work?
The rubric comes with training at all levels, much like the Quality Matters rubric. Much of the training centers on making sense of some of the questions, which could be streamlined. There is training available for faculty, peer reviewers, and staff. The CSU runs classes that cover:

  • Introduction to Teaching Online Using the QLT Instrument
  • Reviewing Courses Using the QLT Instrument
  • Applying the QM Rubric
  • Improving Your Online Course

This gives you an idea about how the rubric is used in the CSUs – teacher training, reviewing courses, and improving courses.

What does it assess?
The rubric has nine sections:

1. Course Overview and Introduction (8 objectives)
2. Assessment and Evaluation of Student Learning (6 objectives)
3. Instructional Materials and Resources Utilized (6 objectives)
4. Student Interaction and Community (7 objectives)
5. Facilitation and Instruction (8 objectives)
6. Technology for Teaching and Learning (5 objectives)
7. Learner Support and Resources (4 objectives)
8. Accessibility and Universal Design (7 objectives)
9. Course Summary and Wrap-up (3 objectives)

What are its weaknesses?
I would like to see how the previous research was integrated into this rubric. And if previous research was used, how successful were the courses (or instructors) that used the earlier rubrics this work was based on? Did they actually improve teaching and learning? Why are some of the rubrics used in the past no longer in use? And as far as I know, little research is being done to compare things like the completion and success rates of courses that have undergone a review with those that have not. A report called “Quality Assurance Impact Research” is supposed to come out this Fall – I look forward to seeing it. But essentially, the weaknesses of this rubric are weaknesses that are common to most.

Some of the items in the rubric are not measurable by an outside or peer reviewer. For instance, 4.7 asks whether “the course learning activities help students understand fundamental concepts, and build skills useful outside of the course.” This would require detailed specialized knowledge on the part of the reviewer to answer in any useful way. Items like “the instructor helped to focus discussion on relevant issues” and 5.1, “the instructor was helpful in identifying areas of agreement and disagreement on course topics,” could only be answered AFTER a class has been taught. 6.3 asks whether “technological tools and resources used in the course enable student engagement and active learning,” yet the evidence required is merely the presence of collaborative online tools, with no clear way to measure their effective use. Owning a Spanish dictionary does not make me a Spanish speaker.

Some of the items require specialized training or access to experts. For instance, 8.6 says “all tools used within learning management system or that are third-party are accessible and assistive technology ready.” I have worked for too many colleges that rely on vendors to make those decisions. Real expertise is needed to determine whether the code in an online tool is accessible or works with a screen reader. I am not sure how a checkbox on a rubric solves that problem.

It is long and detailed. It takes time to actually use it. In the end, this is probably a good thing. I see this more as a training tool than a course checklist.

What are its strengths?
One of its strengths is its flexibility. The authors added the Mobile Platform Readiness section later, as they found that more and more students access their course materials from their phones. The rubric is openly licensed, and documentation aligning it with the Quality Matters rubric is also available. Its weaknesses are also its strengths: the length of the rubric points out that it takes a community to create a successful online course. Faculty, instructional designers, librarians, accessibility specialists, and IT departments all have a role to play in the process. The question then becomes: how much of this belongs in an online course development process and how much belongs in an online course rubric?


