Article: Gulley, Beth. "Feedback on Developmental Writing Students' First Drafts." Journal of Developmental Education 36.1 (2012): 16-36. Academic Search Premier. Web. 18 May 2013. (available from our library database!)
In this article, Professor Gulley asks whether oral feedback, written feedback, or a combination of the two is most helpful for developmental students. Over three semesters, she worked with four classes of developmental writers at a community college on a narrative paragraph (their first assignment). The 70 student volunteers received one of three treatments: a conference with only verbal comments, a conference with both written and verbal comments, or no conference but written comments sent by e-mail. Gulley used a standard form, very similar to the paragraph evaluation form we use in our department, to guide her feedback.
She first discusses the tangled history of research on feedback, paying particular attention to Mary Hiatt's four-page 1975 article, "Students at Bay: The Myth of Feedback," which argues that conferences actually discourage student writers by making those with low-level skills feel attacked and/or by giving the teacher a means to take over a student's paper. She also discusses studies showing that students generally improve their work after conferences, though none of these focused on developmental students (one, for instance, was conducted at Harvard, hardly a hotbed of developmental education).
Her hypothesis was that all forms of feedback are equal; in other words, she expected students to revise similarly no matter how they had interacted with her. To test this, she had all of the paragraphs, in first- and final-draft form, graded by two outside readers and an electronic grammar/punctuation editing program. The students' scores on the COMPASS test (similar to the Accuplacer) were used to control for initial skill level, so the study measured overall improvement rather than which type of feedback produced the most A grades.
What Gulley found was twofold. First, she confirmed her hypothesis: the type of feedback had no measurable effect on the improvement of final drafts. All students improved the organization and content of their pieces between drafts to the same degree, no matter what type of feedback they had received.
Second, Gulley found that the final drafts had significantly more grammatical errors than the initial drafts, again no matter what type of feedback the students had received.
Her conclusion is that teachers should base their feedback method on their own and their students' learning styles. The implication of this admittedly small study seems to be that teachers can choose the method they're most comfortable with and proceed; I can also see how it could encourage a multiple-method or tailored approach to providing feedback.
I'd be very interested to see this type of study extended to compare feedback presented in hard copy with feedback presented electronically (either typed comments or audio comments a student can download). Currently, I'm working with the TurnItIn anti-plagiarism software at LBCC, which lets an instructor give feedback as small line-edit comments on the paper, as a written end-note, as recorded verbal comments the student can download and listen to, and as a final grading rubric. I think individual audio feedback could be very valuable to developmental writers: as Gulley notes, it has been documented to leave students feeling more positive about the feedback they receive and, as a result, more confident about the skills they can improve.
Finally, I found her results about grammar and punctuation even more interesting, partly because I've noticed the same pattern among my own students, particularly those who seek in-person feedback through Tutor Central. Talking through a draft with someone else often produces a marked improvement in content, organization, and general development. However, fitting in all of those new ideas often pushes students toward more complicated sentence forms, which in turn leads to confusing grammatical structures, twisted syntax, and, that bane of all final drafts, faulty proofreading. I was glad to see this quantified here, in part because I have trouble convincing students that an increase in mechanical errors can sometimes be a sign of advancement. Because punctuation errors are often the most visible marks on a document, students read an increase in these "red marks" as certain failure instead of recognizing them as a very small percentage of an overall evaluation of thought and composition.
Anyone else have thoughts about the value of conferences vs. written feedback, or the combination? I've found I'm using conferences these days mostly as a face-to-face check-in with students; it's often the only way to find out whether the quiet student in the back is lost because she's so far behind or bored because she's so far ahead.