It's that time of the semester, coming up quickly, when I'll have my first year writing students do their first peer review/editing session together.
Most writing instructors I know at the college level use peer review/editing in their typical first year writing classes. It's part of the process-oriented teaching of writing that most of us do. And it can be great. Or it can be useless. It all depends on how much the students put into the effort, and what I can do to encourage their efforts.
I use peer review for four basic reasons. First, I think students benefit from having feedback on their papers, especially feedback from peers. Second, I think students learn about writing and communication by practicing critical reading and response skills. Third, even if they do a lousy draft for peer editing, their final draft will be a lot better than it would be without one. And finally, by making peer reviewing an integral part of the writing process, I can make plagiarism less attractive than it would otherwise be. (In an ideal world, I'd have time to sit with every single student to give feedback on drafts. Alas, I don't live in an ideal world. I do give feedback to students who come to my office hours as much as I can.)
I think students figure out quickly that getting good feedback on their work helps them write a better paper. The difficulty is that many students don't quite understand their responsibility to give good feedback to their peers, or they're not sure what useful feedback is or how to give it. Nor do they realize how much they stand to benefit from practicing critical reading and response skills. (I think they understand that even a bad rough draft will make their final draft better; and I don't really worry about whether they realize that part of the purpose for peer review is to discourage plagiarism.)
What I want to focus on here, therefore, is how to encourage students to take the critical reading and response responsibility seriously, how to get them good feedback about their responses, and how to help them learn to respond more usefully.
Typically, my peer review days work like this: I group the students into small groups, preferably with three or four students. The sheer numbers mean I couldn't sit with every group even if I wanted to, and I'll admit that I don't generally sit in on any group unless it seems to be having great difficulty.
Students must bring a copy of their draft for everyone in their group. Each student takes a turn, reading his/her essay aloud while his/her peers read along on their copies, making marks or noting questions. Then the peers provide verbal feedback (I hope), using a form I provide ahead of time that asks them to focus on the big picture, on areas where their peers can develop ideas further, and on areas they find confusing. (I specifically ask them not to worry about grammar or spelling, after explaining that if they do their job, the papers will change so substantially that whole sentences will change anyway; I also set up proofreading before they turn in the final draft.)
So far, I think what I do is pretty much what most of us do. Here's where I've found an additional twist: the next day, each peer must bring in two copies of a written review which focuses on the big issues again, and gives specific feedback.
When they arrive in class, they give both copies to their peer. The peer reads the review and, on both copies, underlines the parts that seem most useful, then numbers those parts from most to least useful. On ONE of those copies, the peer writes a response to the peer reviewer, explaining what s/he finds most useful and suggesting what would be more useful. They hand that copy in to me. They take the other copy home to work on their revisions.
This is where I end up with some extra work, but the extra work is manageable, and it pays off.
I basically read quickly through the responses. I make checks or write "good" or "good point" in the margins as applicable. During the early part of the term, I tend to write a quick note asking reviewers not to focus on spelling or grammar if they're doing that, suggesting that they make specific suggestions (in other words, say which paragraph needs work), or noting that asking questions may be a good way of helping their peer. Then I take a quick glance at the peer's response, focusing especially on the suggestions, underlining the best ones and writing "yes!" or "good idea" where that makes sense. Finally, I grade the reviews on a 1-10 scale and record those grades for each student.
I return the graded response to the peer reviewer. The upside is that s/he usually gets pretty good feedback about his/her response from the peer, with a little additional feedback from me. That gives them a good opportunity to learn about giving better responses. They also receive a grade, and for students motivated primarily by grades, that provides an incentive.
The next upside is that I learn something important about my students as reviewers and responders, and I use that feedback to help me set up peer review groups for the rest of the term.
Here's how that works: I don't have much of a clue about my students' writing skills before I grade the first essay; I read a diagnostic, but when I do, I'm watching for BIG problems (dyslexia, sentence structures that just don't work, and so on) so that I can refer those students for specific tutoring. So I randomly assign the first peer review groups.
For the second and third essays, I make a quick chart ranking the students' previous peer reviewing grades, and organize each group so that each has one strong, one moderate, and one weaker reviewer. (These grades usually match essay grades roughly, but there are enough surprises that it's worthwhile to use the different grade.) I also change the groups with each essay. My goal is that the strongest reviewers will get some leadership opportunities, and the less strong reviewers will get some good modeling.
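For anyone curious about mechanizing that chart, the grouping step amounts to a simple sort-and-deal: rank students by their previous review grades, split the ranking into thirds, and deal one student from each third into every group. Here's a rough Python sketch of that idea; the student names and grades are invented for illustration, not real data:

```python
def balanced_groups(review_grades):
    """review_grades: dict mapping student -> peer-review grade (1-10).
    Returns groups of three, each combining one strong, one moderate,
    and one weaker reviewer; leftover students join existing groups."""
    # Rank students from strongest to weakest reviewer.
    ranked = sorted(review_grades, key=review_grades.get, reverse=True)
    third = len(ranked) // 3
    if third == 0:
        return [ranked]  # class too small to split into thirds
    strong = ranked[:third]
    moderate = ranked[third:2 * third]
    weak = ranked[2 * third:]
    # Deal one student from each tier into every group.
    groups = [list(trio) for trio in zip(strong, moderate, weak)]
    # Spread any leftovers (class size not divisible by 3) across groups.
    for i, extra in enumerate(weak[len(groups):]):
        groups[i % len(groups)].append(extra)
    return groups

grades = {"Ana": 9, "Ben": 4, "Cal": 7, "Dee": 8, "Eli": 5, "Fay": 3}
print(balanced_groups(grades))
# → [['Ana', 'Cal', 'Ben'], ['Dee', 'Eli', 'Fay']]
```

The same deal-from-thirds idea inverts easily for the later essays, when the strongest reviewers get grouped together: just slice the ranked list into consecutive runs of three instead of zipping across tiers.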
For the fourth essay, I again make a quick chart ranking the students' previous peer reviewing grades, but this time I organize the students so that the strongest reviewers are working together. That rewards the students who've been giving good feedback with equally good feedback on the paper that is more heavily weighted in the grading scheme. Students who've become better reviewers get matched with students who are doing a similarly good job reviewing. (Most students do a good job by this point in the term.)
By this time of the term, the people who aren't doing a pretty good job reviewing are generally the people who aren't making a real effort in the class. Often, that means they're missing class, or not bringing in real rough drafts. And grouping those students means that the people who are making a good faith effort don't have to deal with missing peers on editing days, or peers who don't bother to bring in a real draft.
For the final essay (I assign five during the term), I ask for student feedback by email about anyone they really want to be grouped with, or anyone they don't want to be grouped with. The emails are confidential, and they don't have to give me a reason. Most students don't provide feedback, but I try to use the feedback I get as I start making up groups. I finish by using my chart and putting the strongest reviewers together. This is their final essay, and I really want the people who've put in good effort at reviewing to benefit from the good efforts of their peers. By this time, most students have been grouped at some point with everyone else in the class, so they've had a chance to get to know each other and form a sense of community.
I arrange two peer review sessions for the final essay because it's a research essay, quite a bit longer than the others they've done for class, and a major part of their grade. I think it pays off. But I don't have students hand in the second written response.
The coolest email feedback refers directly to what a good job a peer has done reviewing essays; and very often, two peers request to be in the same group for the same reason, even though I don't have much sense of their being pals or friends outside of class.
At the end of the term, I average the response grades, and make a small adjustment for anyone who's dramatically improved. In general, the only time response grades "hurt" a student's overall grade is when the student has missed peer review. Those who've done a seriously bad job as reviewers usually have similarly less than stellar essay, quiz, and journal grades, so the review grade just reinforces the sense of their lack of effort. Some weaker writers do a surprisingly good job peer reviewing because they willingly ask questions about their peers' work, and thus provide really helpful feedback.
At this point, I'm sort of happy with the way my system works. Students find it confusing at first (especially the two copies of their review bit), but most do improve as reviewers. Mostly, I don't like having to read an extra 30-60 pages of student work (including handwriting) for each essay assignment (though I am pretty fast at it).
Still, I could use better ideas! Got any to share?