Course Rubrics: OCAT and Peer Assessment Process

The Coulter Faculty Center eLearning Faculty Fellows at Western Carolina University developed the “Online Course Assessment Tool and Peer Assessment Process,” or OCAT. It is essentially a checklist-style rubric designed to give faculty peer feedback on the design of their online courses.

What Problem Did This Solve?
According to the notes on the rubric, they developed this tool and confidential peer assessment process to provide faculty with constructive peer feedback on the design and instruction of online courses. Their expected benefits were:

  • Constructive feedback regarding teaching effectiveness
  • Instructional improvement
  • Faculty development
  • Opportunities for peer support

What Does The Rubric Assess?
The rubric has seven sections. The first five sections assess course design and teaching concerns. The last two sections provide spaces for summary narratives from a peer reviewer and an instructor response.

The first five sections assessing the course include:

  1. Course Overview & Organization
  2. Learner Objectives & Competencies
  3. Resources & Materials
  4. Learner Interaction
  5. Learner Assessment

There are some interesting ideas in this rubric that I think are important to student success in online courses but that you rarely see covered in a rubric. For instance, under “Resources & Materials” they include “provides opportunities for students to contribute to course resources.” This is a powerful learning opportunity for students in online courses. Of course, I would like to see something here about open educational resources or openly licensed materials (open textbooks or Creative Commons licenses).

Weaknesses of the Rubric
The implied definition of an online course in this rubric relies too much on content delivery rather than knowledge creation or community building. To be fair, the rubric includes statements like “fosters interaction among constituencies inside and outside the course as appropriate (e.g. student-student, student-instructor, and with external persons or agencies),” but “fostering” is not the same as integrating that interaction into the curriculum. There is a recognition of different learning modalities (“learning styles”) in the rubric but no attention to accessibility. Presumably they address this elsewhere, which, given the expertise needed to really address accessibility, is understandable.

Strengths of the Rubric
This is one of those rubrics where the purpose and the process are as important as what it assesses. Some of the issues around using rubrics for assessing online courses include academic freedom and possible union concerns around evaluating faculty. This rubric is focused on peer assessment. A note in the rubric says “the peer assessment instrument itself will also be available for faculty use as a self-assessment faculty development tool,” which, I think, is the most valuable use of an assessment process. The process includes meeting with a trained peer reviewer. The other strength of this rubric is that each section has a space for writing down “instructional item(s) emerging from peer discussion not included in the list above.” This helps address the individuality of the teacher and different teaching styles. Rubrics should not be used to create cookie-cutter courses. It is obvious that the team that put this rubric together did their research and brought in their local academic community to develop this process.


Course Rubrics: CMU’s Quality Assurance Checklist

In my look at online course rubrics, I want to include Central Michigan University’s Quality Assurance Checklist from their Center for Instructional Design (CID) because it is typical of what most colleges that have a rubric use. There is nothing wrong with keeping it simple. The CID staff members “research the latest pedagogical and technological information relating to online and hybrid classes in order to provide you with sound instructional technology support. We are a staff of instructional designers and training experts who provide assistance with the development of online and hybrid courses and course elements. We also offer training on the latest instructional technologies and online teaching and course development workshops.”

The rubric implies a basic, simple definition of online learning. Much of the rubric’s concern is with meeting institutional standards rather than addressing what does or does not work in an online course. For instance, the first item in the rubric asks if the course “adheres to the Master Course Syllabus.”

What does it assess?
The checklist covers the following five areas:

  1. Course Structure
  2. Syllabus
  3. Content Organization & Usability
  4. Instructor Presence & Learning Community
  5. Assessment

with a sixth area for “Additional Comments.” There are 42 items in the rubric. I am not sure why issues like “appropriate technologies and methods are used to support course activities/assignments” are in the “Course Structure” area and not in a separate area discussing technology. This is what is unfair about my evaluation of online course rubrics: each item in a rubric has a history, either from the checklist the institution borrowed it from or from the institution itself. When you see an item in a teacher’s syllabus that says that poisonous plants are not allowed in the classroom, you know that therein lies a tale.

What are the weaknesses of the rubric?
Right off the bat, for a rubric that is focused on quality assurance, there is no discussion of accessibility. There is one item that says “transcriptions are provided on PowerPoint narrated lectures and on course intro audio/videos.” But there is no other attention to accessibility. How about alt tags for images? I understand that a rubric cannot solve all of those issues but a rubric is a good place to get that conversation started with faculty. And the rubric does not tell the whole story of the institution. There could be many opportunities elsewhere for faculty to learn how to implement accessible media in their courses. The CID instructional designers appear to be available to consult on accessibility.
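This is the kind of conversation a simple automated check can help start. As a minimal sketch (the script and its names are my own illustration, not anything from the CID or their checklist), here is a short Python program that scans exported course HTML for images that have no alt attribute at all:

    # A minimal sketch of an automated pre-review check: find <img> tags
    # in exported course HTML that lack an alt attribute. The sample page
    # at the bottom is invented for illustration.
    from html.parser import HTMLParser

    class AltTextAuditor(HTMLParser):
        """Collects the src of every <img> tag that has no alt attribute."""

        def __init__(self):
            super().__init__()
            self.missing_alt = []

        def handle_starttag(self, tag, attrs):
            if tag == "img":
                attr_map = dict(attrs)
                # Decorative images may legitimately carry alt="", so only
                # images with no alt attribute at all are flagged here.
                if "alt" not in attr_map:
                    self.missing_alt.append(attr_map.get("src", "(no src)"))

    def audit_page(html_text):
        auditor = AltTextAuditor()
        auditor.feed(html_text)
        return auditor.missing_alt

    sample = '<p><img src="syllabus.png"><img src="logo.png" alt="CMU logo"></p>'
    print(audit_page(sample))  # ['syllabus.png']

A script like this will not make a course accessible, and alt text is only one item on a long list, but it is a concrete place to start the faculty conversation the checklist leaves out.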

What are the strengths of the rubric?
This is one form in a set: if you look at the page, there is a pull-down menu that leads to a “Peer Review Checklist” and an “Online Course Revision” form. I am hoping that all of these forms together are part of a larger professional development plan that would include faculty workshops on defining terms in the rubric. I like the attention paid to things like item 15, which shows students how to get help. Communicating how to get help, and what the instructor expects of students, is essential in online classes because those expectations differ from face-to-face courses.

I appreciate the focus on instructor presence, building community, and the encouragement for having guest speakers. This checklist would be a good starting point to help a department or institution explore problems and issues in online learning. For a rubric to be this simple, there would have to be a lot of agreement (and experiential homogeneity) as to what constituted a quality online experience for students.

Posted in assessment | Tagged , , , , , , , | Leave a comment

Seeing Ourselves Through Technology

I just finished reading Jill Walker Rettberg’s 2014 book Seeing Ourselves Through Technology: How We Use Selfies, Blogs and Wearable Devices to See and Shape Ourselves. There are so many things I love about this book, but because I am so passionate about open educational resources and Creative Commons licensing, the first for me is that it is freely available under a Creative Commons license from a major publisher.

Walker Rettberg has been an important part of my thinking on technology for some years now. She brings to her work scholarship, lucid connections to the humanities, and a personal dimension missing from similar writing on technology. Her work on blogging and hypertext theory is a must-read, and because it is grounded in the humanities, it is always relevant. There is a great intersection in her work of history, literature, and philosophy.

The six chapters cover visual self-portraits, filters and filtering, selfies, automated diaries, the quantified self, and a final chapter about privacy and surveillance. Each one of these chapters explores technologies that create a lens through which the self is presented. It is refreshing to read scholarship on selfies that does not just write off an entire generation as self-absorbed – there is so much else going on here. Just as drawing, physical photography and previous art forms have led others to explore and experiment with the presentation of the self, so too with these new technologies. According to an article in the Independent, “For the first time ever there are more gadgets in the world than there are people, including a growing number that only communicate with other machines, according to data from digital analysts at GSMA Intelligence. The number of active mobile devices and human beings crossed over somewhere around the 7.19 billion mark.” To think that we can take the old paradigms and apply them to this brave new world is the height of folly. We need this kind of scholarship to help us sort out where we have been and where we are going.

It is a short book covering a broad subject. In her historical run-up to where we are now, I would have loved to have seen something about the importance of letters and epistolary novels, and more about the Twitter-esque writings of Samuel Pepys – who now has a Twitter account. Walker Rettberg is a good antidote to the classist fear of technology and modern culture found in writers like Sven Birkerts, who believes that you are not really reading a book unless you are reading it in the same format and cultural milieu in which it was written. It is refreshing to read an examination of technological culture written by someone who is not afraid of it. There are echoes here of Barthes and Merleau-Ponty in its focus on the expressive nature of human beings through writing, image, gesture, and lived behavior.

Writing like this is critical. In my own field, education technology, there is a common lack of a sense of history. The entrepreneurial culture is mired in the flash of newness. This is why commercial MOOCs, for instance, have been such a huge failure: folks like Sebastian Thrun and Andrew Ng have no sense of where online teaching and learning has come from, so they are stymied by the simplest of issues (e.g. online students need support) that were solved in the 90s.

One of the important points I took away from this book is her acknowledgement that technology does not create the image of ourselves but an image. By looking at the images of ourselves that technology provides, we have to question the images of ourselves that we already hold and why we hold them. The book discusses how we see ourselves through the lens of data, and the last chapter brings into question how we let corporations track and manage that data. I highly recommend this book and look forward to her future work.


Betsy DeVos Cancels Solar Eclipse

I don’t know if you have gotten this letter yet. Seems a little strange to me but I guess that is where we are now: 

August 16, 2017

Dear Fellow Educator,

I am writing this letter to announce that the Department of Education is canceling next week’s so-called “solar eclipse.” This is not because others in the administration feel that this eclipse is a holdover from the failures of the Obama administration but because it is completely against the new direction that education is taking in America.

Not all states are getting the total solar eclipse so it is unfair that the Department of Education should support it. This is clearly a states’ rights issue and observation of the eclipse should be decided at the local level on a school district by school district basis.

Also, the eclipse is taking place at different times in different parts of the country. It is a total eclipse in one state, partial in another, and one state is observing at one time and other states at another. In other words, it is totally unpredictable. And why are the STEM disciplines fetishizing the solar eclipse? Why aren’t there any lunar eclipses to provide a balanced view? Experts tell me that during the eclipse the moon will move from right to left across the sun, which is an obvious politicizing of what is supposedly an educational event.

There are STEM instructors who are using this eclipse in assignments. This makes the experience of the eclipse a learning artifact assessable in student portfolios, which makes the eclipse a clear violation of the Family Educational Rights and Privacy Act of 1974 (FERPA). In addition, the assignments present ADA concerns, as the learning artifact is accessible solely as visual media.

A solar eclipse leaves many of our online students at a disadvantage. I talked to my IT people about the affordances, objectives, and outcomes of the eclipse as a learning object. They said that they were using the Canvas LMS and that plunging the world into darkness and lowering the temperature by 25 degrees was more of a Blackboard thing, but that there were some promising possibilities in Second Life and other simulators.

We are currently setting up a commission to examine this issue to address any eclipses that might occur in the future. This commission will propose a special voucher that school districts will be able to apply for on a case-by-case basis if they wish to participate in any possible subsequent eclipses.

Sincerely,

Betsy DeVos

United States Department of Education


Course Rubrics: OCEP

Some of the rubrics I am examining are a bit of a time warp. The Online Course Evaluation Project (OCEP) from the Monterey Institute for Technology and Education, for instance, is from 2005. I think it is important to look back at these documents to see how our understanding of online teaching and learning has evolved, and to help us decide what is important to us as educators in elearning. Our definitions of what makes a successful online experience have changed over the years (hopefully!) based on experience and research, but by looking at these rubrics, we have an opportunity to make decisions about what we would like to see happen in online education over the next decade. Each rubric allows us to ask what problems the institution was trying to solve. I am interested, as I think about a meta-rubric for course evaluation rubrics, in what the original purpose of each rubric was. For instance, the OCEP was meant to evaluate “existing online courses in higher education.” This is a different project than building a template for online courses or educating faculty on how online courses succeed.

What problems does this rubric solve?
According to the rubric, the goal of OCEP is to provide the academic community with “a criteria-based evaluation tool to assess and compare the quality of online courses.”

It goes on to say that “the focus of the evaluation is on the presentation of the content and the pedagogical aspects of online courses, yet OCEP also considers the instructional and communication methods so vital in a successful online learning experience.” Existing online courses are identified and measured against a set of objective evaluation categories.

How was it created?
They also discuss the research that went into the rubric, although I would hope that the research itself was shared at some point: “these criteria were developed through extensive research and review of instructional design guidelines from nationally recognized course developers, best practices from leading online consultants, and from numerous academic course evaluations.”

How does it work?
I like this part of the rubric. It acknowledges the fact that courses are created by a community and not just a single instructor or subject matter expert. “OCEP employs a team approach in evaluating courses. Subject matter experts evaluate the scholarship, scope, and instructional design aspects of courses, while online multimedia professionals are used to consider the course production values. Technical consultants are employed to establish and evaluate the interoperability and sustainability of the courses.” In other words, it acknowledges the local community and culture that is creating the courses.

“The ongoing results of the OCEP study are available publicly in a web-enabled comparison tool developed in partnership with the EduTools project from WCET. OCEP is a project of the Monterey Institute for Technology and Education (MITE), an educational non-profit organization committed to helping meet society’s need for access to effective, high quality educational opportunities in an era of rapid economic, social, and personal change. The Monterey Institute for Technology and Education was founded as a 501(c)3 non-profit.”

What does it assess?
The pattern of assessment here is similar to what would come out of courseware development at a textbook publisher. The assessment seems to follow the production model of their course development. The assessed categories are:

  1. Course Developer and Distribution Models
    This is exactly how publishers think of materials first – who is it for, how will it be delivered, and who owns it: “This section notes the type and status of the course developer, the major methods for distribution of the courses to the organization’s constituents, and any licensing models employed by the developer.”
  2. Scope and Scholarship
    This section of the rubric is handled by the subject matter expert – usually the instructor: “This section focuses on the intended audience for the course, as well as the breadth and depth of the content presentation.”
  3. User Interface
    The course is then handed off to an instructional designer whose review includes the evaluation categories “that address the instructional design principles used in the access, navigation and display of the course that allow the user to interact with the content.”
  4. Course Features and Media Values
    Although it is not clear exactly how these “values” are measured, this section “addresses the types of media used to convey the course content and to demonstrate how the user interacts with the content presentation.” So far, this is a huge project for an evaluator, but the rubric goes on to look “at the effectiveness and relevance of the content presentation, the level of engagement for the user, and the instructional design value of the multimedia content.”
  5. Assessments and Support Materials
    Much like the materials provided with a textbook-centric course, this section “addresses the availability and types of assessments (complete tests or other activities used to assess student comprehension) and support materials that accompany the course as a resource for the instructor and the student.”
  6. Communication Tools and Interaction
    This section “addresses the course management environment, how communications take place between instructors, students and their peers, and what course content exists to effectively utilize the communication tools provided by the CMS.” By CMS our authors mean the learning management system, and they make the assumption that “since most course management environments include the functionality noted in the evaluation categories, the emphasis for this review is placed on the course content designed to drive the use of the communication tools (threaded discussion, chat, group or bulletin board activities, etc.).”
  7. Technology Requirements and Interoperability
    This section “addresses the technology and distribution issues related to the course, as well as the system and software requirements, operating systems, servers, browsers and applications or plug-ins.” The section also includes an evaluation of the accessibility, a copyright review, and “interoperability” standards (e.g. SCORM) applied to the course and course content.
  8. Developer Comments
    This is the one section where outcomes and student support is explicitly stated. This section gives the course developer “an opportunity to highlight unique features of the course, provide a summary of course outcomes per available information, and clarify other course resources.”

What are the weaknesses?
It is focused more on a production model than on course evaluation. I am not sure how this should be used to evaluate “existing online courses.” The model seems to be a one-way dump of information to the students, with some acknowledgement of the importance of interactivity. There is not enough information in the rubric itself to say how course reviewers would go about performing the review. Here the annotations of a rubric like Quality Matters are very helpful.

What are its strengths?
This rubric recognizes that it takes a team to create online learning. That is an important part of developing customized rubrics for an institution – it is an opportunity to bring together all of the campus resources relevant to a successful online experience. If a program, department, or institution needed to develop online instruction and needed a production model, this would be a good start, but only with the caveat that building one of these courses is not the same thing as instruction. Additionally, the rubric acknowledges the importance of interactivity as a success factor in online teaching and learning.

Let’s be clear – the OCEP is 12 years old, and there have been a number of advances in our understanding of online teaching and learning since then, but understanding where we have come from is an important part of figuring out where we are going. It is always useful to look at how other institutions have evaluated online classes.


Course Rubrics: Illinois’ Quality Online Course Initiative

I have not used this rubric to evaluate a course, but I have read about it, and no research into rubrics for course development would be complete without looking at this one. The Illinois Online Network and the Illinois Virtual Campus developed their own quality online course rubric and evaluation system. Their goal “is to help colleges and universities to improve accountability of their online courses.”

What problems does this rubric solve?
According to their website, the main objectives of this project were to:

  • create a useful evaluation tool (rubric) that can help faculty develop quality online courses
  • identify “best practices” in online courses
  • recognize faculty, programs, and institutions that are creating quality online courses

How was it created?
Again, according to the site: “The first step of our process was to brainstorm criteria for what makes a quality online course. As we came up with our ideas, we noticed several patterns and overlapping information. Therefore, we decided to chunk the information in to six categories.” I appreciate the fact that this was an in-house project, but I would like to know more about any research that went into it besides “brainstorming.” I am sure that faculty with experience and expertise contributed to this project, but knowing how they identified “best practices” is important. There are a lot of things that teachers used to do as “best practices” that we just don’t do any more (e.g. physical pain as a learning motivator).

What does the rubric assess?
After the brainstorming process, each category was then broken down into topics and finally individual criterion which included these six categories:

  1. Instructional Design
    The instructional design section is fairly basic. It defines learning as a “transfer of knowledge and skills,” which is a bit dated.
  2. Communication, Interaction, and Collaboration
    I am glad there is communication, interaction, and collaboration in this rubric. They could go a little further and think about community building beyond group work.
  3. Student Evaluation & Assessment
    As always, assessments are aligned with learning objectives. There are items in this category that should be under “syllabus boilerplate” like a statement of FERPA rules.
  4. Learner Support and Resources
    This is a critical area as those in the commercial MOOC world found out. Supporting students and building that support into the curriculum is crucial to student success in online learning. The rubric assesses a bare minimum of support.
  5. Web Design
    With its concerns about “scrolling,” “pop-up windows,” and “frames,” the rubric’s gray 1998 roots are showing. This needs a refresh with some Universal Design for Learning.
  6. Course Evaluation
    “Opportunities for learner feedback throughout the course on issues surrounding the course’s physical structure (e.g. spelling mistakes, navigation, dead links, etc.) are provided.” – likewise for Instruction and Content.

Weaknesses of the Rubric
The weakness that stands out as a surprise is the lack of discussion of accessibility and universal design. I would like to see this rubric refreshed and backed up with research. There is a page on their site that has links marked “research” and “resources,” but that seems to be an unimplemented plan. The pedagogy around the rubric needs updating as well.

Strengths of the Rubric
There are some things to like here, though. First, it is used for course development, not merely as a “fix-it” tool. It is also used for recognizing exemplary courses and faculty. This is one of the most important uses of rubrics on a college campus: the discussion of rubrics, an adoption plan, and implementation can bring faculty together to develop quality online programs from within the culture of the campus. It becomes another hub for connecting faculty with one another. Finally, it is openly licensed, which means other campuses get to benefit from this work as they customize a process for the needs of their own teaching and learning communities.


Course Rubrics: Quality Matters

I have a lot of experience with this rubric as a teacher, instructional designer, and administrator. I have worked with the Quality Matters rubric in some form or another since the early 2000s. I was certified as a reviewer through the California State University Chancellor’s Office and have reviewed numerous courses using the rubric. While there are many things to like about the rubric, I am a big fan of the organization in its early days. One of the most useful things accomplished by Quality Matters is their research library, which gathers the research behind the standards of the rubric, such as interaction.

History
The QM rubric has certainly been influential. Before fiscal year 2007, while still under their grant, the rubric and research were freely available to the public (after that, they introduced a business model of fees and subscriptions to support the research and training). According to their site, Quality Matters began with a small group of colleagues in the MarylandOnline, Inc. (MOL) consortium who were trying to solve a common problem among institutions: how do we measure and guarantee the quality of a course? In 2003, they applied for a Fund for the Improvement of Postsecondary Education (FIPSE) grant from the U.S. Department of Education to create the Quality Matters program.

What does the rubric assess?
The rubric addresses eight research-based standards for successful, quality online courses. They include:

  1. Course Overview and Introduction
  2. Learning Objectives (Competencies)
  3. Assessment and Measurement
  4. Instructional Materials
  5. Course Activities and Learner Interaction
  6. Course Technology
  7. Learner Support
  8. Accessibility and Usability

Strengths of QM
I have seen this rubric in action. It has its shortcomings, but one of the most important things it does is get an instructor to think about what happens in an online course and how courses (online, hybrid, and face-to-face) can be successful. The rubric gets faculty to think about how they communicate the goals and expectations of the course to their students (standards 1 and 2). Sometimes the disconnect in a course is that an objective of the course isn’t actually assessed. I can personally attest that faculty with little to no online experience were able to use the QM rubric to build a successful online course. The tool helps faculty understand issues in online learning that they may not have considered before, such as student interaction and the accessibility of course materials.
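That kind of disconnect is easy to check for once a course map exists. As a minimal sketch (the course data below is invented for illustration; this is not a QM tool), a simple set comparison finds objectives that no assessment covers:

    # A minimal sketch, not part of Quality Matters: given a hypothetical
    # mapping from each assessment to the objectives it covers, report any
    # stated objective that is never assessed anywhere in the course.
    course_objectives = {"obj 1", "obj 2", "obj 3", "obj 4"}

    assessment_alignment = {
        "week 2 quiz": {"obj 1"},
        "midterm essay": {"obj 1", "obj 2"},
        "final project": {"obj 2", "obj 4"},
    }

    # Union of everything any assessment touches, then the leftovers.
    assessed = set().union(*assessment_alignment.values())
    print(sorted(course_objectives - assessed))  # ['obj 3']

The hard part, of course, is building the alignment map in the first place, and that is exactly the conversation the rubric forces.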

I have worked with a few instructors who, after going through the Quality Matters training, not only created an excellent online course, but applied the principles to improve their face-to-face classes as well. It is a great opportunity to get instructors thinking about teaching and learning in general rather than their subject in particular.

This points out one of the things this rubric does well: it is not so much the rubric itself as the process. Some of the best work on a college campus happens when you bring faculty together to review one another’s courses and talk about what worked for them and what didn’t. The faculty are often very creative in meeting the outcomes of the standards in ways that I am sure the QM folks could not have imagined.

I like the caveat included in the rubric about accessibility (standard 8). It says “Meeting QM’s accessibility Standards does not guarantee or imply that specific country/federal/state/local accessibility regulations are met. Consult with an accessibility specialist to ensure that accessibility regulations are met.” The idea that a course can be declared accessible by checking off five items in a rubric is a bit ridiculous.

Weaknesses of QM
I think that the financial model is an issue. I know a lot of faculty who are enthusiastic about it because QM pays faculty to review courses out of the fees it charges institutions. I understand and value the recognition of faculty expertise and time, yet we typically don’t pay faculty extra money to observe and review face-to-face courses. For small, cash-strapped institutions, subscriptions and fees are a burden.

The QM rubric seems weighted towards objectives and assessment and, in my experience, does not allow for adequate assessment of teacher-student or student-student interaction, community building, fostering student agency, or opportunities for students to guide the learning experience. I have written before on this blog about my conflicted relationship with course objectives. With the rise of online learning over the last 20 years, we are learning a lot about how students learn (e.g. Connectivism), and as much as I appreciate all of the research that has gone into this rubric, it needs a refresh. Standards 2-6 all have “learning objectives and competencies” as the first concern. While learning objectives have their uses, they are often more important to the institution’s ability to supposedly measure them than to student learning. Sally Stewart did some interesting research around these questions (see The Trouble with Learning Outcomes). And I get it, students need to know what will be covered in their Psych 100 class, but the objectives don’t teach the class. Often, the most important things that happen in a classroom are the things that can’t be measured, but we will not see a standard for “opportunities for transformative learning experiences” or creativity anytime soon. I am not above negotiating course objectives when students express a need or desire, or when we need to change the direction of a class due to current events – that kind of flexibility and openness is valuable to learners. But how do we measure it? Should we? Doesn’t that kind of teaching style contribute to the success of the students?

Additionally, if rubrics are going to be used to educate faculty on how online learning works, then they need to measure the openness of a course: does the instructor use OER or open textbooks? Accessibility should also account for whether a course is economically accessible.

QM Rubric as a Facilitation Tool
Overall, the rubric is worth using as a starting point: the opportunities for faculty to work together on courses and programs are invaluable. It is a useful tool for introducing face-to-face faculty to online learning and for changing courses that are one-way information dumps into something more engaging. Personally, I think this can happen without the cost, and the tools institutions use should be open (Creative Commons licensed, at least).



Course Rubrics: OEI

The first rubric I will look at in this seven-part series on course evaluation rubrics is the California Community Colleges’ Online Education Initiative rubric.

Who wrote it?
This rubric was created by the California Community Colleges Chancellor’s Office to provide a checklist demonstrating that the online courses offered through the Online Education Initiative align with state standards, the accreditor ACCJC’s Guide to Evaluating DE, and national standards (iNACOL). After reviewing the rubric, I think they did a good job of meeting and even exceeding those standards.

There are a couple of things I liked about the OEI rubric right away – it recognizes established standards (expressed in the rubric) and its primary use is for self-evaluation. This rubric is supported by a useful website from the California Community Colleges Chancellor’s Office called “Online Course Design Standards.” This page contains an explanation of how the rubric came about, and there are links at the bottom of the page to the latest two versions of the rubric and training materials. Most importantly, it is all offered to the public under an open license (Creative Commons CC BY, the most flexible license), making it free to use, unlike other options such as Quality Matters. In their discussion of the license they ask for feedback, which I think means they are aware of the organic nature of our current understanding of how online education works.

What problems does this solve?
According to the site, “The Rubric is intended to establish standards relating to course design, interaction and collaboration, assessment, learner support, and accessibility in order to ensure the provision of a high quality learning environment that promotes student success and conforms to existing regulations.”

This has the potential to solve a number of issues with online courses, including communicating to instructors who are new to online learning what some of the differences might be between online and face-to-face teaching. In my experience, an instructor’s first impulse is to recreate online, as much as possible, what happens in the face-to-face class, which we know doesn’t work.

“In order for a course to be offered as a part of the Online Education Initiative (OEI) course exchange, it must meet established standards relating to course design, instruction, and accessibility that are intended to promote a quality learning environment that conforms to existing regulations. Prior to the submission of a course for OEI consideration, it is helpful for the faculty member to review these guidelines and conduct a self-evaluation. The outcome of this self-evaluation is a component of the OEI Course Application process.” I think the self-evaluation component of this process is absolutely critical no matter which rubric you are using. The main reason is that the individual instructor is often the only person who understands the degree of interactivity a course might have. Each instructor has a different way of implementing successful online teaching techniques. If there is evidence for the implementation of successful teaching, it often has to be located first by the instructor. Instructors can use the rubric not only to fix problems with a less successful course but also to identify why a course, or aspects of it, are successful.

What is in it?
The rubric has five sections, each with several elements. In the first three sections (A, B, and C), each element is assessed as either Incomplete, Aligned, or Additional Exemplary Element, the last of which recognizes “design choices that further enhance the student experience in the online learning environment.” Sections D and E are marked only Incomplete or Aligned because they address elements that are required by law to be present. (A short sketch of this rating scheme in code appears after the section summaries below.)

Section A: Content Presentation
“The 13 elements for quality course design in this section address how content is organized and accessed in the course management system. Key elements include course navigation, learning objectives, and access to student support information.”

Section B: Interaction
“The 8 elements in this section address instructor initiated and student initiated communication. Key elements of quality course design covered in this section include regular effective contact, student-to-student collaboration, and communication activities that build a sense of community among online learners.”

Section C: Assessment
“The 8 elements in this section address the variety and effectiveness of assessments within the course. Key elements include the alignment of objectives and assessments, the clarity of instructions for completing assessments, and evidence of timely and regular feedback.”

Section D: Accessibility
“The 23 elements in this section are reviewed to determine if a student using assistive technologies will be able to access course content as required by Section 508 of the Rehabilitation Act of 1973 (also known as ‘508 Compliance’).”

Section E: Institutional Accessibility Concerns
“The 4 elements in this section cover accessibility of external tools and third-party content. While the accessibility elements in Section D are primarily under the control of faculty when developing a course, the elements in Section E may be outside the purview of the instructor which would require additional consideration or intervention at the institutional level.”
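To make the two-tier rating scheme concrete, here is a minimal sketch in Python. The class names and data layout are my own modeling assumptions, not anything published by the OEI; the element counts come from the section descriptions above:

    # A minimal sketch (my modeling, not an official OEI tool) of the
    # rubric's structure: sections A-C allow an "additional exemplary"
    # rating, while sections D-E are strictly incomplete/aligned.
    from dataclasses import dataclass, field

    DESIGN_RATINGS = {"incomplete", "aligned", "additional exemplary"}
    COMPLIANCE_RATINGS = {"incomplete", "aligned"}

    @dataclass
    class Section:
        name: str
        element_count: int
        allowed: set                              # ratings reviewers may assign
        ratings: dict = field(default_factory=dict)  # element number -> rating

        def rate(self, element, rating):
            if not 1 <= element <= self.element_count:
                raise ValueError(f"section {self.name!r} has no element {element}")
            if rating not in self.allowed:
                raise ValueError(f"{rating!r} is not allowed in section {self.name!r}")
            self.ratings[element] = rating

    sections = {
        "A": Section("Content Presentation", 13, DESIGN_RATINGS),
        "B": Section("Interaction", 8, DESIGN_RATINGS),
        "C": Section("Assessment", 8, DESIGN_RATINGS),
        "D": Section("Accessibility", 23, COMPLIANCE_RATINGS),
        "E": Section("Institutional Accessibility Concerns", 4, COMPLIANCE_RATINGS),
    }

    sections["A"].rate(1, "additional exemplary")  # fine: a design section
    sections["D"].rate(1, "aligned")               # fine: a compliance section
    # sections["D"].rate(2, "additional exemplary") would raise ValueError

Modeling it this way makes the design visible: sections A through C reward going beyond alignment, while D and E are strictly compliance checks.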

I would estimate that a 5-unit online English 101 course could be reviewed by an experienced course reviewer using this rubric in about 6 to 8 hours. It is thorough, addresses the standards it claims to, and recognizes some of the difficulties in evaluating accessibility. There are opportunities in online course evaluation to create a larger project outside of corporate models such as Quality Matters. The Creative Commons license on the OEI rubric means that others can not only use the rubric but can also share research data, such as any changes in student success and completion rates for courses that were reviewed versus those that were not. The OEI rubric is an important step in that direction.

I will be looking for more research into the successful use of this rubric.

If you have any additional information or experiences with this rubric, please feel free to post a comment below.


Evaluating Online Courses: a prelude

This is the first post in a seven-part series examining how different institutions evaluate online courses. Each of the seven postings after this will feature a different institution’s rubric for evaluating online courses. I am specifically interested in what institutions have created, adapted, or purchased, and why. Each posting will attempt to look at:

  • What problems does this rubric solve?
  • How is this solution tied into the campus community?
  • What is unique to this campus culture that is addressed by this rubric?

One of the issues I have had as an instructor, instructional designer, and administrator is that the evaluation of online courses tends to be an afterthought – something you do only after a course is created and built. It is important that the specifications for online courses are communicated to faculty before the design process begins. This assumes that there is a “design process,” and that is not always the case. For rubrics to work effectively in online course evaluation, they must be part of the course development process. This is not as intuitive as it sounds, because the historical processes for getting online courses created can differ from department to department within a college.

All of this raises the question: why use a rubric for evaluating courses at all? We don’t have rubrics for evaluating face-to-face courses, so why do we need them for online courses? First of all, instructors and students have had lifelong experience of being in face-to-face classrooms, and there are accepted standards as to what constitutes a face-to-face classroom and how it should work. We do not have that yet in online learning, despite the decades colleges have been offering online classes. With some notable exceptions, early online courses were concerned with the transfer of information, which is not the same as teaching and learning.

Many instructors and institutions were not clear on how learning occurs online. Since the 90s, there has been a lot of research on how people teach and learn online successfully. Interestingly enough, that research follows the same lines as the research on how to teach effectively in face-to-face courses. I am thinking here of Chickering and Ehrmann’s online follow-up to Chickering and Gamson’s research on face-to-face undergraduate success. Rubrics for evaluating online courses can be a method for communicating the expectations and best practices of online learning and teaching to teachers who are new to it or who have not had a lot of experience online.

All of that said, these rubrics are not a teacher evaluation tool. One can build a course to the specifications of a rubric and still have an unsuccessful course. Its implementation and success will always rest in the teacher’s hands.

And not all rubrics are created equal. Some of them are very basic, and some of them try to evaluate things that are difficult to account for in a rubric: cultural inclusivity, for instance, may best be accounted for by formal peer and student evaluations. Not all of the rubrics stress the things that we now know are critical to online course success, such as community, or the latest research around Connectivism and Open Pedagogy. Despite the research into successful online learning, there can still be great disparities within a university between its online courses and its face-to-face courses. One cause of this is that, despite the widespread implementation of various solutions, each institution is unique – each institution is embedded in a community with different needs. The questions I hope to answer this summer are: how do we account for this uniqueness while attempting to apply a solution from outside the institution? What are we really doing when we adopt a course evaluation rubric? What are the concerns or opportunities when adopting one?

My goal at the end of these postings will be to present a meta-rubric for evaluating course evaluation rubrics (something along the lines of Dr. Mollinix at Monmouth University) and to develop an adoption plan for instructors, departments, or institutions that will address the needs of the learning community, the professional development of the instructors, and the goals of the departments or institutions.

The rubrics I am evaluating include:

  1. OEI: Course Design Rubric for the Online Education Initiative (California Community Colleges Chancellor’s Office)
  2. Quality Matters
  3. Quality Online Course Initiative Rubric and Checklist (Illinois Online Network)
  4. Online Course Evaluation Project (OCEP) (Monterey Institute for Technology and Education)
  5. Quality Assurance Checklist (Central Michigan University)
  6. Online Course Assessment Tool (OCAT) and Peer Assessment Process (Western Carolina University)
  7. Quality Online Learning and Teaching (California State University)

Let me know if you would like me to look at other rubrics, or tell me about your experience with online course evaluation rubrics (the good, the bad, and the ugly). I would love to hear from you.


Nomic: Games in Adult Basic Education

I have not posted here in a while. I have moved back up to Washington State, set up an education consulting business, and am, gratefully, back teaching again. I have been very focused on my teaching at Green River Community College, where I am teaching ABE (Language Arts/Social Studies and Language Arts/Science). I have always loved teaching at this level because this is where teachers can make a huge difference in the lives of others. Teach students how to read “The Compleat Angler” and you have fed them for a day. Teach them how to write an effective cover letter for a job and you can feed them and their families for a lifetime.

One of the cool things I have had the opportunity to do is update the Nomic game I used to use in English 101 and 102 for ABE. If you are not familiar with Nomic, I have posted about it here a few times before. I had the help of Jacqui Cain, who has a certificate in reading, my own humble experience, and the assistance of three students who read it for our three C’s: clarity, cohesion, and conciseness. The link to the old posting will give you the old game. The new version can be downloaded here from my site.

Okay, so I am interested in two things: any thoughts you may have on this game, and, if you are using games in ABE or developmental education in your classroom, I would love to hear about it.
