Rubric Development and Inter-Rater Reliability Issues in Assessing Learning Outcomes
Abstract
This paper describes the development of rubrics that help evaluate student performance and relate that performance directly to the educational objectives of the program. Issues in accounting for different constituencies, selecting items for evaluation, and minimizing the time required for data analysis are discussed. Aspects of testing the rubrics for consistency between different faculty raters are presented, along with a specific example of how inconsistencies were addressed. Finally, the difference between course-level and programmatic assessment, and the applicability of rubric development to each, is discussed.