by Dr. Suzanne L. Pieper, Coordinator of Assessment
An essay by Vincent Tinto in Inside Higher Ed reminds us that college success is built “one class and one course at a time.” Tinto further points out that in successful college classrooms, students get frequent, high-quality feedback on their products and performances. One way instructors can provide that feedback is by using rubrics to assess student work. Rubrics are an efficient way to make instructors’ expectations explicit and to promote fairness and consistency in grading.
Why don’t more instructors use rubrics? A major obstacle is the amount of time it takes to construct a good one. A tempting shortcut is to choose a rubric from the wide variety available online and in print. But how do you know if the rubric you choose is a good one? And how do you know if it will work for you and your students? Three questions will help you choose and use the best rubric for your class.
What am I looking for?
Think about three to five criteria that you could use to assess student responses to a performance task. In What’s Wrong—and What’s Right—with Rubrics, W. James Popham says it’s tempting to describe all possible assessment criteria, but it’s best to keep your rubric brief. When designing or selecting a rubric, ask yourself, “What are the most important elements of this assignment that demonstrate student learning?”
Let’s imagine that you want students to be able to write a research paper. You find an online rubric that lists the following criteria for assessing a research paper:
- Purpose
- Content
- Organization
- Feel
- Tone
- Sentence structure
- Word choice
- Mechanics
- Use of references
- Quality of references
- Use of most recent edition of MLA Handbook for Writers of Research Papers
You wonder if you really want to assess eleven criteria. You also notice that some of the criteria aren’t distinct. What is “feel,” and how does it differ from “tone”? When you look at the intended learning outcomes for your course and scrutinize the research paper assignment you’ve given your students, you discover that five criteria are most important:
- Purpose
- Organization
- Content
- Mechanics
- Use of references
Now you have a manageable number of rubric criteria that will guide your students to improve their skills in writing research papers. An added bonus: you can use these same criteria for a variety of writing assignments in your course.
What is the possible range of student products/performances?
Rubrics need to accommodate the entire range of possible student responses, but how do you decide how many separate levels of performance to recognize in your rubric? The best way is to review actual student work. Start by sorting the work into upper-range and lower-range responses, and then sort further as needed. How many “piles” do you have? The number of piles should give you an idea of how many performance levels you will need in your rubric. Each performance level needs to be clearly distinct from the next so that there is no question about which level a particular piece of student work meets. Most rubrics use three to five levels; with more than that, it becomes difficult to distinguish clearly between levels. You also want to think about how to label the performance levels. The labels should make the distinctions among levels clear without discouraging students. Here are some examples.
Let’s say you want to assess discussion posts in your online course. You know what you’re looking for, and you’ve identified five levels of student performance from your review of prior student discussions. However, you aren’t sure how to describe those levels. Say you find a collection of performance level descriptions online, and you think one set shows promise: accomplished, advancing, developing, beginning, no concept.
Ask yourself how you would distinguish between “beginning” and “developing.” Also think about how your students might react to being described as having “no concept.” You realize that these descriptions apply to student development rather than to the students’ work, so instead you might settle on five different performance levels for online discussion: excellent, good, average, fair, and poor. These labels reflect the range of your students’ performances, make it possible to distinguish between performance levels, and are not discouraging to students.
How do I describe what I am looking for at every point in the range of student products/performances?
Now that you have identified what you are looking for and the possible range of student performances, you are ready for the final step in adopting or adapting your rubric: the descriptions. The descriptions are the “meat” of the rubric because they explicitly detail what a student needs to do to get a score at each scale point. They also provide instructors with clear guidelines for improving student learning. Descriptions need to be consistent, distinct, and written “in plain English” so students can understand them.
| Top Level | Middle Level | Low Level |
| --- | --- | --- |
| Do X, Y, and Z | Do X and Y | Do X |
Let’s say you want to assess oral presentations in your course. You know what you’re looking for, you’ve identified and labeled three levels of student performance, but you’re not sure how to describe exactly what a student needs to do to get a score at each point in the scale. You find an oral communication rubric in a resource book that at first glance looks perfect! When you inspect the rubric more carefully, however, you notice that the descriptions don’t always focus on the same characteristics across performance levels. For example, you want to assess pacing, yet pacing is described as “paced for audience understanding” in the high performance level, “sometimes too fast or too slow” in the middle performance level, and not mentioned at all in the low performance level.
You revise the descriptions of pacing so that the characteristic is addressed at every performance level and changes in a measurable way between adjacent levels. Drawing from an example given by Robin Tierney and Marielle Simon in What’s still wrong with rubrics: Focusing on the consistency of performance criteria across scale levels, you modify the rubric so that at the high performance level, the student “always paces for audience understanding.” At the middle performance level, the student “sometimes paces for audience understanding.” At the low performance level, the student “doesn’t pace for audience understanding.” With a few more revisions, you can use your rubric to assess your students’ oral presentations effectively. Additionally, when you share the rubric with your students, they can use it to better understand your expectations and deliver higher-quality presentations.
| Excellent | Acceptable | Needs Improvement |
| --- | --- | --- |
| Always paces for audience understanding | Sometimes paces for audience understanding | Doesn’t pace for audience understanding |
What successes have you had with designing rubrics for your courses? What challenges have you encountered? Please share your comments.
References
Popham, W. J. (1997). What’s wrong—and what’s right—with rubrics. Educational Leadership, 55, 72-75.
Tierney, R., & Simon, M. (2004). What’s still wrong with rubrics: Focusing on the consistency of performance criteria across scale levels. Practical Assessment, Research & Evaluation, 9(2).
