Thursday, September 11, 2014

Evaluating On-line Instructors?

The reality is that we have some folks who teach only on-line.  And we're supposed to evaluate their teaching.

Usually, to evaluate teaching, we look at:

a teaching observation (a written report by a tenured colleague about an observed class session)
course evaluations
teaching materials provided by the candidate


To this point, we don't know how to do an observation of on-line teaching.  Does anyone have ideas about how that might be done?


To this point, on-line students rarely click through and complete the course evaluations.


I'd really appreciate ideas for how to evaluate on-line teaching, especially from folks who've taught or evaluated on-line courses themselves.

Thanks!

6 comments:

  1. We evaluate online courses according to how well things are organized and how easy they are to find. We actually have a (not-so-great) rubric for this. But organization is vital in an online course, especially in something like Blackboard. The writing/communication must also be evaluated: how easy are the assignments to understand?

    And then I'd look carefully at the Discussion Board and how the instructor responds to discussion. If the instructor has a bunch of video lectures, I'd look at those too.

  2. Anonymous (7:23 AM)

    In addition to what Earnest English said above, I would also request samples of the instructor's interaction with students (comments on student work, email to students, however this is done).

    Online courses should also be evaluated according to the Quality Matters rubric.

  3. Does the instructor have mandatory synchronous meetings with the students?

  4. There are definitely some well-respected existing rubrics out there; Quality Matters is one of the ones I, too, have heard about. Sloan is another respected name.

    However, when evaluating "homegrown" courses, it's worth keeping in mind that an individual instructor can't always provide *all* the elements such rubrics call for, and that over-valuing such factors can lead to privileging big-academic-publisher-created "course packages" over locally hand-crafted (and regularly revised according to local student needs) ones. To take one example: does a course targeted to a specific semester and class really need to be accessible to people with a wide range of disabilities if no student in that particular course has any of those disabilities? You wouldn't design a face-to-face course with all possible eventualities in mind; instead, if a particular student needed particular accommodations, you'd work with the student and the disability office -- and you'd hope that they gave you at least a bit of advance notice. That seems to me a fine approach to online courses as well, though I can also see the need to plan for all contingencies in a "course package" that is essentially a multimedia textbook, since instructors often can't change much in those packages -- which, of course, is one of the major downsides of such packages.

    But the rubrics are locally adaptable, and the basics Edward mentions are, indeed, important -- you want to check navigability, and clarity of instructions, and teacher/student interaction, and how well the teacher fosters student/student interaction -- and all of that should be relatively accessible to a colleague with some sort of guest account (though the instructor being reviewed may also want to provide particular sample materials, as would usually be the case for a face-to-face class). It would be better if the person doing the reviewing also had experience teaching online (an argument for making sure that online classes don't become mostly or wholly staffed by contingent faculty, or that experienced contingent faculty play a role in evaluating their colleagues). However, many students still come to online classes with no experience, so a professor who has never taken an online class isn't an entirely bad proxy for judging things like navigability and clarity of instructions.

    So, as someone who teaches pretty regularly online, I'd say that evaluating online classes is entirely doable, and that you should look at the rubrics out there, while keeping in mind that the rubrics may take a particular, fairly large kind of online class, often designed by a commercial publisher, as their ideal, and that you may want to come up with methods of evaluation that allow locally-produced and/or adapted classes to score at least equally well. (Personally, I'd like to see such courses score higher, at least in disciplines, including English lit and composition, where small, individually-crafted and regularly-revised classes are the face-to-face norm, since I believe they are, in fact, better courses. Although I don't view online instruction as nearly the menace that, say, Jonathan Rees does, I do think there is some very strong pressure toward standardization, and that there's a possibility that rubrics designed for a broad range of online courses by organizations with nationwide influence could play something of the role that standardized testing has played in K-12 instruction, strongly favoring equally standardized curricula produced by for-profit companies. There's a lot of money at stake in this game, and it makes sense to pay attention to the interests at play and to how the decisions you make might favor different stakeholders. I'd argue for decisions that support professorial autonomy as long as courses meet learning goals.)

    Replies
    1. There's a pretty good description of how the Pearson package courses work here: http://www.slate.com/articles/life/education/2014/09/online_college_classes_textbook_companies_offer_courses_with_minimal_university.html (and yes, there's a myEnglishLab or a myCompLab or something along those lines; I haven't used it, but I've received promotional emails). Some of my colleagues use such packages (usually, as far as I can tell from conversations, to teach basics, with supplementary individual or group projects that involve more critical thinking on the students' part and individual/group feedback on the instructor's part). I believe they can serve a useful purpose. But I don't think any of us want them to be the be-all and end-all of online instruction, and some of the rubrics out there seem to measure things that such packages are far more able than individual instructors to provide, while paying relatively little attention to things like activities that teach critical thinking and effective problem-solving (and, pretty much by definition, elicit idiosyncratic student responses requiring fairly detailed instructor attention and feedback).

  5. These comments are really helpful; thank you!
