Self-evaluation of rater bias in written composition assessment
DOI:
https://doi.org/10.4312/linguistica.54.1.261-275

Keywords:
rater bias, types of rater bias, self-evaluation, increased awareness, reliable and valid assessment

Abstract
No assessment is entirely free of bias. This paper presents findings on how the raters in the research group evaluate the extent to which they are influenced by various types of rater bias when grading their students' written compositions. The sources of bias covered in the article include the teacher's familiarity with the student writer and with his or her proficiency in English, the difficulty of the writing task, distressing content likely to trigger the rater's emotional reaction, the test taker's views clashing with those of the rater, the student's progress, and the like. The data were collected from the study participants via a questionnaire, and the researcher's interpretation of the respondents' answers was then verified through interviews. Although both research methods and self-evaluation have their drawbacks, the results reveal relevant and important information about the aspects that make written composition assessment less reliable and valid. The findings confirm the need to raise raters' awareness of the causes of bias to which they are most susceptible, bringing them closer to effectively addressing the problem of assessment bias. The research, which involved eleven lecturers teaching Language in Use at the Department of English and American Studies at the Faculty of Arts, University of Ljubljana, is part of a much larger project based on the author's PhD thesis.