Cult of Pedagogy

Rubric Repair: 5 Changes that Get Results

March 17, 2019

Mark Wise



Listen to my interview with Mark Wise (transcript):

Sponsored by Pear Deck and Microsoft Teams for Education

In my 20-year career as an administrator, I’ve had the opportunity to be a fly on the wall in hundreds of classrooms across grade levels and content areas. While much has changed in education over those years, one element has remained constant: the well-intentioned use of rubrics with varying levels of success.

Starting with their use as early as kindergarten and continuing through high school, rubrics are meant to clarify expectations, but poor design can make the experience anything but clear. Rows of criteria describe, often in 10-point font, how students will be graded for an upcoming project. Students attempt to decipher the details, review the exemplars provided, and note the corresponding due dates, but the information doesn’t always translate into action.

This is unfortunate because a well-designed rubric can be more than just an evaluation tool. For the teacher, it can clarify expectations, which increases the likelihood that these traits will be attended to during instruction and as a result, can provide more targeted feedback. For the learner, knowing what is expected from the start along with clear indicators of progress provides an effective means to self-assess, make adjustments, and improve the quality of performance.

So how do we get there? How do we take our current rubrics and fine-tune them so they deliver on those promises? These five guidelines will help.

1. Measure What Really Matters

Teachers often include multiple criteria in their rubrics without adding the primary learning outcome they are most interested in: Did it persuade? Did it create an emotional connection? Did you win your argument? We tend to construct our rubrics with important but sometimes peripheral components of performance because they are easy to see, count, or score (Wiggins and McTighe, p. 34). For example, if your students are writing an editorial, it could be stylistically sophisticated, well-organized, and meet the length requirement. However, an editorial, regardless of how well written, is wholly ineffective if it fails to persuade the reader.

Moreover, since students can become overwhelmed by the sheer number of criteria they are required to meet, less can be more. One way to accomplish this is to use a single-point rubric, which allows students to focus on the stated expectations while receiving feedback on the degree to which they are meeting them. If you are using a more traditional 4-6 point rubric that details the continuum of performance, it is even more incumbent upon you to identify the main reason students are engaged in the task in the first place and edit down the criteria so they incentivize students to focus on the most important aspects of the performance.

The “before” example below shows a rubric that measures a lot of things that don’t have anything to do with whether a student understands the lunar phases: things like creativity, attractiveness, and whether the task is completed on time. If the purpose of the assignment is to assess whether students have really learned the lunar phases, the rubric should focus primarily on whether the content in the model is accurate and effective.

In the revised version below, all of the criteria are focused on the quality and accuracy of the information in the model, measuring the scientific thinking that building the model was meant to elicit in the first place.

2. Weigh the Criteria Appropriately

As designers of rubrics, we can signal to students that certain criteria matter more than others. Just because a rubric has four criteria doesn't mean each needs to be worth 25 percent of the score. With the weight of each criterion adjusted, the rubric itself guides students to focus on what is most important.

In the example below—which is the same revised rubric from above, but where the teacher wanted to include some accountability for mechanics—all four criteria are weighted exactly the same. This means a student who demonstrates a perfect understanding of the science behind lunar phases, but who struggles with spelling and punctuation, could end up with a C on the project. That would not be a true reflection of mastery.

By contrast, in the revised rubric below, “Mechanics” is only assigned 10 percent of the overall grade, while the other three criteria make up 90 percent combined. This way, the final grade will be a much more reliable measure of student understanding of lunar phases.
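To make the arithmetic concrete, here is a minimal sketch in Python. The function and criterion names are hypothetical, not taken from the rubric above; it simply shows how re-weighting changes the outcome for a student with a strong grasp of the science but weak mechanics.

```python
def weighted_grade(scores, weights):
    """Convert per-criterion scores (out of 4) into a percentage grade."""
    total = sum(scores[c] * weights[c] for c in scores)  # weighted average on the 4-point scale
    return total / 4 * 100

# A student with a perfect grasp of lunar phases but weak mechanics:
scores = {"accuracy": 4, "completeness": 4, "explanation": 4, "mechanics": 1}

equal_weights = {c: 0.25 for c in scores}                    # every criterion worth 25%
adjusted_weights = {"accuracy": 0.30, "completeness": 0.30,
                    "explanation": 0.30, "mechanics": 0.10}  # mechanics only 10%

print(weighted_grade(scores, equal_weights))     # equal weights drag the grade down
print(weighted_grade(scores, adjusted_weights))  # weighting reflects actual mastery
```

With equal weights this student lands at 81.25 percent; with mechanics reduced to 10 percent, the same work comes out at roughly 92.5 percent, a far better reflection of what the student actually understands.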

Another thing to keep in mind is that each criterion doesn’t have to be graded every time or the same way. Within the structure of the rubric, teachers have a great deal of flexibility. Although all of the criteria are important, we can use our discretion by grading only certain items, then attending to the others when those skills or concepts are formally taught and practiced. Likewise, since our expectations of what students can accomplish at the beginning of the year are quite different from what we expect at the end of the year, we can continue to adjust the grade or point values we assign to each column.

3. Check Your Math

Your point values for each column need to yield an accurate reflection of the student's performance. For example, on a 4-point rubric where a "3" means "meets expectations," most teachers believe the grade should fall somewhere in the A- to B+ range. But scored literally, 3 out of 4 points is 75 percent, a C, even though the student met expectations for every criterion in that column. This mismatch forces the teacher either to give an unfair grade or to alter the feedback in order to generate the desired grade. Neither result is helpful for the student.

A design tip is to look at each column vertically and choose a number or range that would be appropriate for a student scoring exclusively within that column. The corresponding grades don’t have to reflect a neat 4, 3, 2 progression. Often using decimals or a range of numbers is necessary to align each column vertically to a grade that matches the column’s descriptors.
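As a sketch of this tip, compare a naive points-divided-by-points-possible conversion with a lookup that assigns each column a grade matching its descriptors. The grade values below are illustrative assumptions, not prescribed conversions; choose numbers that fit your own scale.

```python
POINTS_POSSIBLE = 4

def linear_grade(column):
    """Naive conversion: column score divided by points possible."""
    return column / POINTS_POSSIBLE * 100

# Instead, assign each column a percentage that matches its descriptors.
COLUMN_TO_GRADE = {
    4: 98,   # exceeds expectations
    3: 90,   # meets expectations -> A-/B+ range, not a C
    2: 78,   # approaching expectations
    1: 60,   # beginning
}

def aligned_grade(column):
    """Look up the grade that actually matches the column's descriptors."""
    return COLUMN_TO_GRADE[column]

print(linear_grade(3))   # 75.0 -> reads as a C
print(aligned_grade(3))  # 90 -> matches "meets expectations"
```

The lookup makes the vertical alignment explicit: each column is pinned to a grade chosen for its descriptors, rather than falling wherever the division happens to land.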

4. Can Do Rubrics, Not Can’t Do Rubrics

We need to consider the language we choose so that our rubrics encourage students to improve. Without realizing it, when we detail the levels of performance, we tend to use degrees of deficiency (e.g., mostly, somewhat, lacking) rather than affirmative, non-judgmental statements about what students are capable of at each point along the continuum. If we truly want the rubric to be a tool that enhances students' ability to self-assess and thus improve their performance, we must provide clear markers for how they can improve, not unintentionally send the message that their ongoing work is insufficient rather than on a path of progress.

The examples below show two rows in a rubric for a research project. The first uses deficit language to describe the lower levels of performance, while the second describes each level in terms of what the student CAN do.

A real-world model that clearly illustrates a learner's progression toward proficiency is the traditional swim chart, which indicates where the swimmer is on the path toward independence in the deep end of the pool. The language of the swim chart describes traits, from treading water to an extended front crawl, that are required for the learner to move from Tadpole (beginner) to Seal (advanced). The expectations are clear, measurable, and non-judgmental; they describe what swimmers can do rather than what they can't. We can view the transparency and progression of the swim chart as an aspirational goal for our rubric design, being mindful to choose language that states the desired outcome rather than anticipated problems.

Educators are already working toward this in the area of world languages. The American Council on the Teaching of Foreign Languages (ACTFL) has performance descriptors that describe the learner’s degree of communication fluency. ACTFL has also developed “Can Do” statements that describe what language learners can do consistently and independently. Like the swim chart, these indicators allow learners to use the statements for self-evaluation so they are more aware of what they know and can do in the target language. In turn, world language educators have developed rubrics that match those desired outcomes.

5. Models, Models, Models!

When we design rubrics, we tend to pore over their construction. We perseverate over the language we use (“should I say ‘clearly’ or ‘distinctly’?”), repeatedly delete rows or columns, and painstakingly choose fonts.

Students, however, experience the rubric differently. After being introduced to it at the start of a project, the next time they typically see it is when it is returned, replete with circled boxes, teacher comments, and a final grade. Understandably, many students suffer from “rubric fatigue,” a condition caused by encountering a series of disconnected rubrics across subject areas on any given day.

What students really benefit from are actual models (both exemplars and non-exemplars) that link to descriptors on the rubric that illustrate the quality of work expected. For example, in the real world we typically have many models of the performance or product we are trying to create or implement—whether it is for how a game should be played, a song should be sung, or an editorial should be written. Similarly, when launching a project, showing students multiple examples of prior student work or appropriate real-world examples makes the rubric meaningful and brings these descriptors to life.

Another strategy is to have students sort the various models in order to determine the specific qualities that make some examples stronger than others. The teacher can then incorporate the students’ language in their draft version of the rubric or highlight aspects the teacher may have initially overlooked before distributing the final rubric. This process allows the students themselves to generate the criteria and descriptive language of the desired performance which deepens their understanding and creates shared ownership of the expectations for quality work.

Similarly, the rubric can become more user-friendly if we make the criteria more visible to students. This can be done by adding hyperlinks of models tied to the descriptors so students can access a range of examples to inform where they are along the continuum and show them exactly how they can improve.

Better Rubrics Support Student-Centered Learning

As educators, we are rightfully drawn to the goal of establishing a classroom where students have opportunities to engage in complex problem solving, participate more actively in dialogue and debate, design their own experiments, or research topics that interest them. These types of experiences—ones that do not result in a single “correct” answer or follow a formulaic procedure—require more of an open-ended assessment tool with clear guidelines for success that students can use to guide, self-assess, and ultimately improve their learning. Hopefully by following these five suggestions, your rubrics will help clarify what you really want students to take away from the experience, provide your learners with the means to get there, and allow you to fairly and honestly assess their performance. ♦


Wiggins, G., & McTighe, J. (2012). The understanding by design guide to advanced concepts in creating and reviewing units. Alexandria, VA: ASCD.



  1. Peggy Larson says:

    Thank you for this! I am one of the teachers who always struggles with using a rubric–because it tends to be punitive instead of helpful and students rarely use it. I have instead been using a type of pre-rubric where I spell out specific requirements and the point value…

    I also really like the idea of single point rubric as well…as long as we are clear on our expectations for each assignment, we are giving students the opportunity to both meet and exceed these expectations!

    Thank you!

    Peggy Larson

    • Arjan Harjani says:

      Very helpful, clear, easy to follow and modify. I know it is time consuming but I do totally agree that better rubrics and models generate better products from students. For me, it continues to be work-in-progress and a drive to be a better facilitator and driver for and toward excellence.
      Thank you Jennifer for all your expertise and collaborating with other experts in education. Thanks to Mark Wise as well and to the pioneers, Wiggins & McTighe.

  2. Susan Grant-Suttie says:

    When making rubrics, I always put the language in grade-level talk. So often I find rubrics written in teacher pedagogical talk, which is unfair to the child who is trying to do their best but cannot understand the request. Furthermore, I always supply a model, even of an essay, with the defined points expected, e.g.: This is a type of thesis statement “…” My students understand what to do, and I know exactly how to compare (if necessary) and mark.

  3. Jeffrey Spafford says:

    Thank you for this! I was just talking with some colleagues on how subjective rubrics can be, and these applicable tips really take the subjectivity out entirely! Also, I like the before/after visuals and models, and now, I have a much clearer objective when designing my next rubrics. Thanks again, great read!

  4. I can’t help but think that assessments which emphasize mastery, rather than compliance, also serve to feed students’ understanding that school is for learning. Until we change a prevailing “I don’t care, I just want to get it done and get a good grade” attitude, we can’t hope to fix education. Along the same line, I’m especially interested in assessing students’ development of dispositions, along with skills and understandings, as learning objectives. I’d love to hear your thoughts on making this happen in an effective manner!

  5. Cinnamon Kern says:

    Rubrics are most efficient with my seniors if the closest column to the general descriptor is the high score, not the low score. Also, a standardized 4-point score makes it much easier to train students over the year to use rubrics for their own feedback and adjustment before I ever get there.

  6. Natalie Chilese says:

    I have a question that I have struggled with when making rubrics: I know that projects always come out better when examples/models are offered, but if I am creating a new project, how can I provide these without falsely creating my own? Should I just allow the first year of a project to be a little looser?

    • Eric Wenninger says:

      This is a good question! Doing a project for the first time makes incorporating past students’ models difficult. Whenever I have created a new project for students, I always use my own writing and examples as models. If you are having students practice using a rubric with work that represents various levels, you can create your own. While these aren’t authentic student examples, you are still modeling the difference in skills attainment that students are figuring out through their use of the rubric. I think that your examples still give students the benefit of having models.

  7. Heather Dow says:

    Thank you for this well written and tangible blog posting on rubrics! I am a high school science teacher in British Columbia and often have to remind myself, what really matters? Our province has recently implemented a new curriculum, and my school is heavily focused on new assessment models to ensure that when we take marks it is specifically to measure their competency in specific curricular competencies, and not “the fluff.”

    Yes, we are encouraged to take a more flexible, personalized approach with our assessment methods, but we should NOT be taking marks for attractiveness and creativity since that doesn’t measure their understanding of the material.

    I am trying to do an overhaul of my assessment techniques (which are rubric heavy!), and your post helped give me some clarity on how to improve my practice. So thank you!

    One question I have for you… do you think it’s helpful for students to fill out their own “Self Rubric” before handing a project or lab in? Does it help to ensure they actually read the criteria, or is it a waste of time? I’ve done this before with my students, but I never actually do anything with their “self rubric,” and it usually just gets lost in the shuffle.

    Thanks for your thoughts!


    • So glad to hear you liked the post, Heather! Check out this article – I think it will be really helpful, especially sections 3 and 4.

      Also, if you’re not yet familiar with Peergrade, take a look!

      Hope this helps.

  8. Norkhalilie Zulkifli says:

    Thank you. This is really good sharing. We often misunderstand and misuse rubrics. This has opened my views on how to come up with good, reflective, meaningful rubrics.

  9. Pedagogy & Pugs says:

    Thanks for sharing these wonderful tips! I’ve been engaging in dialogue regarding verbiage used in rubrics and am curious what the recommendations (and ideally research-based practices) are for using the second person in rubrics (e.g., Your response was on topic and well-supported with details and examples from our lecture). This could be considered student-centered, but the counterargument is that “Your” could provoke defensive reactions to grading. Looking forward to [your] thoughts! 🙂
