Sunday, May 14, 2006

Finals - thoughts on academic ancestors

In the ed biz, it's time to grade like a demon and yes, write final exams. (Unless you're lucky enough to be completely finished with your term, in which case, you're not reading this anyways.)

Depending on the course, I have a couple of different exam formats I use. Whichever format I pick, I have a couple of goals for every exam, some of which I achieve, and some I just aspire to.

From most mundane to most important:

I try to write an exam that a student who's never bothered coming to class or doing the reading will fail miserably. (Yes, sure, someone who's already expert in the field should do just fine, but my students don't come into my classes as experts, or they wouldn't be in my classes.) (I think I generally succeed pretty well at this part.)

I try to write an exam on which a student who's come to class, done the reading, and put forth a solid effort in preparation and studying can show what they've learned and pass. (I generally do okay here, too. An exam isn't about showing that I can trick people, or that I'm smarter than they are. I could trick them; what's the point? And I'm not smarter than many of my students, so trying to demonstrate that I am would be a waste of effort.) As a corollary, I want a really stellar student to be able to show how stellar s/he is.

I try to write an exam that will be a useful learning experience in and of itself. I want it to help students put together the information they've been learning so that it makes better sense to them (a goal of the comprehensive essay part). In an ideal world, the student who's been working hard all semester will realize through preparing and taking the exam that s/he's learned a whole lot, and that the hard work was totally worth the effort. (I don't think I succeed here too often.)

The exam style I use most often for Shakespeare or lit classes looks basically like this:

Short identification section - terms, concepts, dates, etc. Write a 1-3 sentence definition and connect the term/date to something we've read or discussed in class. There's always some choice in this section (do 6 of 7 or something), so that someone can forget a term and still do well.

Passages - from whatever text(s) we've been working on. Identify the text, speaker, and context in a few sentences. Write a short explication of the passage focusing on some aspect of word choice, imagery, concepts, poetics, etc. Don't paraphrase, but do feel free to make connections to other parts of the text, or to other texts from class. Again, there's always some choice in this section.

The final looks the same, except there's also an essay section which tries to get students to make connections across the whole semester. Again, I give two essay choices, and each student chooses the one s/he wants to write.

The format, of course, isn't something I made up myself. Nope, in fact, I adopted it from my graduate school mentor, for whom I TA'd a number of classes (as well as grading for other classes). My graduate school mentor adopted it from his graduate school mentor. And I'm guessing that person adopted it from her graduate school mentor, though I can't be sure.

So, writing my Shakespeare exams, I have an academic grandmother who I've never met (and probably never will), but I use her exam format.

Ideally, I suppose, adopting teaching techniques and strategies comes from the apprenticeship aspect of graduate school teaching. (At any rate, I don't remember any pedagogy classes teaching me to write exams.) The exam format works well enough for me in general that I continue to use it. It's somewhat of a pain to grade, though since my grad school mentor and his were both in PhD programs with lots of grad students available for grunt work, I suppose they didn't much worry about that. And it's no worse to grade than a lot of other formats I've seen.

I was talking to a colleague in another field recently about exams; she's in a field where multiple guess exams are pretty standard, and so she gets to grade at least part of her finals via a scantron machine. I realized that I'd have no clue how to write a decent multiple guess exam. But she learned through her apprenticeship not only how to write them, but that they're appropriate to her field.

Since my apprenticeship was moderately long (3+ years of TAing in various ways, and more grading for additional classes), I had lots of opportunities to pick up exam formats and watch others teach (which, of course, I'd been doing since I was 5 years old). And there are other formats I use occasionally, especially when I want to challenge students in a different way. One of the best is to give students a group of passages from texts, and ask them to explicate one fully while referencing another. That one works well as a take-home exam, especially in a class where there's a fair overlap of conceptual or stylistic work, such as a single-author type class.

Even though I picked up exam formats from my various exemplars in graduate school, we almost never talked about exam writing as such. I didn't think to ask most of my professors why they wrote exams the way they did, or what they wanted to accomplish with exams. I just wasn't "there" yet, wasn't ready to think about teaching that fully. It's a shame, really.

And my graduate program really didn't encourage students to talk about teaching as such, to talk about what we wanted our students to learn or reveal by taking exams. (We did, on occasion, talk about such things, but not out in the open.) Our professors rarely talked to us about teaching per se, either.

Now, when I'm so much more ready to think about strategies for teaching, I'd love to go back and ask the best teachers from my past about their strategies and their exams. I wonder, though, how conscious some of them were about what they were doing? Or how much they just adopted their mentors' formats, assuming that the mentor had thought things through better than they could?


  1. That is pretty much the format my Col History through 1776 professor used for our tests. Before that I was very used to part multiple-choice and part essay, with the essay being worth more.

  2. Yes, this is something that I've often thought about, too.

    Partly, I think we in the humanities tend to give exams like this because they are efficient. They really are. I have tried to figure out what else I could do in one hundred minutes that would do what I need an exam for lower-div survey courses to do. They work.

    Yet, there is the mighty inertia of tradition. I, too, saw nothing else in my TA /grading years. At that point, I never thought to ask.

    Lately, though, in some classes, I have been giving process-oriented exams, i.e. exams that present students with a problem or "real" scenario to work through. This requires students to draw on their whole semester's experience.

    For majors in an upper-div classics course, this might be something like giving them a choice of two passages (neither read in the course: one from the same author/genre; the other from the same period but different author/genre) for which they write a commentary, one that is political/historical, or social/gender-focused, or rhetorical/philological or whatever the course may have treated. It must relate to primary and secondary readings, etc. Students tend to take close to the maximum time and work pretty hard. C students write what one would predict. Students who haven't come to class or done the work are all at sea. A students come up with some interesting material, and they feel as if they put their learning to the test, not their review. They leave the exam tired, but in high spirits, feeling very ready for the next class.

    It works for those sorts of classes, but I haven't quite figured out how to do that with lower-div survey courses in translation. Still doing the ol' usual. It's OK, though.

    And, my colleagues don't talk about these things either, unless it's about online assessment (so that they don't have to come to campus).

  3. I teach Shakespeare, too, and my exams look exactly like what you've described. Maybe we're related?

  4. Anonymous, 3:30 PM

    Interesting thoughts. I think I agree, but then I'm curious to know your thoughts on something else, which has actually been on my mind for a while now. Maybe you've already considered this, but have you thought about exchanging essay exams for papers as perhaps a more accurate (fairer?) evaluation of students' overall comprehension (and so including synthesis) of a given amount of material?

    It's something I've done, myself, and I have to say that, overall, I'm now much more pleased (and so more confident in my final evaluations) with my students' performances. More than a couple have actually mentioned to me, in person or on evals (and I guess these types of comments should be taken with a grain of salt, as who knows what their motives may ultimately be, but then I think their performances should be granted some credibility, too, regardless of motive), that they prefer papers over exams as the more valid measure of whether they're meeting a given course's ends or objectives (or, in their non-administrative language, 'papers give me a better opportunity to show you what I can do'--and, of course, what someone 'can't do,' all the more reason for their validity, I say).

    I guess what the question ultimately comes down to is what we're ultimately after (or what we believe or tell ourselves we're after) in giving these two very different tests, and whether in fact what we're after is (or should be) best measured by one test over another (or if a given test is really the best measurement of what we're looking for in that type of test).

    Though I do think, admittedly guardedly, that essay exams have their merits and place in the literature classroom, for a couple of the reasons you mention, and can be a measure of sorts of certain abilities and skills, I just have to wonder if those abilities and skills might more faithfully be evaluated (or shine brighter) in a more holistic realm, a realm 'holistic' inasmuch as those skills and abilities could be included and compared with other abilities--a realm found in a paper, say. Or, put otherwise, I'm not sure that the results found or looked for in an essay exam are necessarily the most valid (or strongest?) reflection of what one would be after in such an evaluation (in a literature class).

    Granted, I prefer papers over essays, and if possible do exclude exams entirely. I'm responding, I guess, to a certain sense (mine?) that, while papers may be the best overall indicator of ability and understanding, a timed exam is perceived as the more immediate (and so, by this logic, somehow more sincere) reflection of students' understanding, since what these exams gauge is indeed just that: students' immediate comprehension and synthesis of the 'material at hand.'

    And, I guess, that's fine, if that's the criterion one uses; if we want a test of their ability to gauge material at hand, an exam is the way to go. But, well, I have to wonder if this type of discussion doesn't invite the kind of mismatch that in some sense always creeps into (and is the very reason for) these types of convos and discussions.

    Essay exams are a good measure of the things you mentioned (short answer, identification), but I wonder if they're really that good an evaluation of the more synthetic, higher-level analysis skills which are to be found (and measured and rewarded) in a lit class, skills which, precisely because they are higher-level and ostensibly more sophisticated, will likely already include, though certainly not be limited to, the types of questions anticipated on exams.

    So, I guess what I'm saying is: why give two (very different) tests to measure what can reasonably (and generally more than fairly) be expected to be found on one? (The assumption here, of course, is that such analysis, because it asks for synthesis and comparison, will, of necessity, take more time to flesh out than the types of questions asked in an exam-type setting--but then, because it does, it will likely reflect and demonstrate, of necessity, precisely those skills sought after in a sitting exam.)

    Time pressure is always going to be an issue, for sure, whether writing a paper or an exam (duh). But, assuming we retain some choice or say in the administration of our particular 'tests,' why make this pressure even more palpable than it already is? And why do it especially if what we (lit-type folks!) are after is ultimately less a test of students' manipulation of time than a measure of their astute and acute intellectual acumen under it?