THE CULT OF PEDAGOGY PODCAST, Episode 117 TRANSCRIPT

Jennifer Gonzalez, host


If you’re a teacher — and if you’re listening to this podcast, I’m assuming you teach or have taught in some capacity at some point — you have no doubt used rubrics. These charts tell students how they will be evaluated on a given task, and although they are probably most common in English language arts classes, they can and should be used in any content area where an assignment can’t be graded with a simple score that measures correct answers.

But as common as they are, their effectiveness really runs the gamut. I’ve seen rubrics that are perfectly clear in how they outline expectations for an assignment, offering guidance to students about how to approach a task and valuable information about what to do differently the next time around. I have also seen some that are incredibly convoluted and hard to follow, others that include way too many subjective criteria, and others that pack in so much that I suspect most students never bother to read them.

It’s a shame, because the more the world fills up with ineffective rubrics, the worse their reputation will be with students, parents, and teachers. We need to shift that trend so that better, more effective rubrics become the norm, rather than the exception.

I’ve been meaning to do a post and podcast on this topic for a while, just digging into some best practices in rubric design, but before I got to it, I was contacted by Mark Wise, a New Jersey school administrator who has spent years working with teachers on their craft. Mark had noticed the same inconsistencies with rubrics that I had, and wanted to share some of the guidelines he had developed for rubric creation with his staff. So he wrote a guest post for Cult of Pedagogy about it, and in this episode we’re going to talk about his key points — 5 changes you should consider for making your rubrics much more effective.


Before I play our conversation, I’d like to thank Microsoft for sponsoring this episode. Microsoft Teams for Education is a digital hub bringing assignments, conversations, and content all together in one place. Plan, share, and connect with students, staff, and fellow teachers across your school. Whether you’re grading and providing feedback to your student’s history project or sharing your next big idea for a lesson with your department, Teams for Education can help you and your school achieve even more. Teams for Education – making classrooms collaborative and saving time for teaching. Visit microsoft.com/education to learn more.  

Support also comes from Pear Deck, the tool that helps you supercharge student engagement. With Pear Deck, you can take any Google Slides presentation, add interactive questions or embed websites, and send it to student devices so they can participate in real time while you present. And now Pear Deck has teamed up with Google on Be Internet Awesome, a free digital citizenship curriculum that helps kids learn to be safe, more confident explorers online. Pear Deck educators worked with Google to create interactive presentations that accompany the lessons from Be Internet Awesome. Each one gives teachers a simple way to introduce a concept related to digital literacy. And because they’re editable, they’re easy to tailor to your students’ grade level. The basic version of Pear Deck is free, but my listeners can now get a complimentary 60-day trial of Pear Deck Premium with no credit card required. This will give you access to features like the teacher dashboard, personalized takeaways, and more. To learn more, head to peardeck.com/cultofpedagogy.

The Cult of Pedagogy Podcast is part of the Education Podcast Network. The EPN family now includes 27 different podcasts, and each one is focused on education. Check out all of the EPN podcasts at edupodcastnetwork.com.

Now here’s my conversation with Mark Wise about 5 ways to improve your rubrics.


GONZALEZ: Mark Wise, welcome to the Cult of Pedagogy podcast.

WISE: Thank you, Jennifer.

GONZALEZ: We are going to talk about rubrics. And so before we do, why don’t you just tell my listeners a little bit about who you are and what is your interest in rubrics?

WISE: Well who isn’t interested in rubrics, first of all?

GONZALEZ: Yeah.

WISE: Second of all, I’ve been an administrator for the last 20 years in West Windsor-Plainsboro, New Jersey. I started out as the social studies supervisor, and now I work with grades six through 12 and with new teachers, really trying to help them in their professional development and help them grow as educators. So I work across disciplines now, which I really, really enjoy. And when it comes to rubrics, it’s one of those things where I used them when I was a teacher, I experienced them obviously as a student, and I’ve seen them used throughout my 20-year career, always, I think, with the best of intentions, teachers believing they’re going to have an impact on the student —

GONZALEZ: Yes.

WISE: — and really help clarify the desired learning. And it doesn’t always work out that way. So in working with teachers, especially new teachers, and helping them see the forest for the trees, I’ve found some simple solutions that can help people move their rubrics from not being as effective as they think they are to being a little bit more effective for the goals that they have.

GONZALEZ: Right. And it was great that you reached out to me, because this is something I’ve been wanting to tackle for a long time, and this gave me an opportunity to do a deep dive with you. So we are going to go through these five different suggestions and talk about the problems that each one addresses. And the hope is that teachers listening will find at least one thing where they can look at their own rubrics and say, oh yeah, that thing right there, that’s something that I’m not doing and could be, and it’s going to make my rubric better.

WISE: That would be awesome.

GONZALEZ: Yeah. So let’s just get started. No. 1 is measure what really matters. So talk about that a little bit.

WISE: Yeah, and this is one of those things, again, I see a lot of rubrics, and they have various criteria listed for what they want students to accomplish, but oftentimes they put in criteria just because they can be counted or are easy to see or score in some way. And what ends up happening is it sometimes incentivizes students to focus on things that are less consequential; they’re more focused on the desired formatting than the desired thinking.

GONZALEZ: Right.

WISE: And so they’ll focus on things like the introductory sentence or the grammar or how many citations, and lose the whole impact the teacher was trying to get from having the student do the project or the performance or the paper in the first place.

GONZALEZ: Right. And you know, I’m thinking about this, about myself as a teacher, and I was an English teacher, so if I wanted my students to write an editorial, for example, I feel like I would have had that persuasive stuff in there, but it would be scrambled in with a whole bunch of other stuff too, things about spelling and length and organization, things that are important to writing. But what you’re saying is that sometimes there’s so much of that other stuff that you sort of forget about the main reason you’re having the student do the assignment in the first place.

WISE: Right, or at least the student has a hard time seeing through it all to what’s really most important here. Is my goal to try to make an emotional connection through my piece? Is my goal to try to persuade or win an argument? What am I really trying to do here? And sometimes if there’s all this other stuff, it’s hard to weed through it and really focus on the impact I’m supposed to have by doing this performance or piece in the first place.

GONZALEZ: Right. So for three out of these five points, we’ve actually got sort of a model before-and-after rubric that we’re showing people, and the one we’re using for this one is a task where students were required to create a model of the lunar phases, which is something required by many science standards: they have to show the phases of the moon. So in the “before” version, the one that hasn’t been fixed, the example of an ineffective rubric, what’s the problem in here? Describe what we’re seeing in here that’s kind of muddling it.

WISE: Well, it looks like the teacher has five different criteria, focused on creativity, accuracy, attractiveness, and then mechanics and timeliness, as in, was it turned in on time? And a lot of the criteria the student will be looking at to judge whether they did the task well don’t really have much to do with their understanding of lunar phases. I think four out of the five of these are really peripheral criteria that could support the student showing that they understand it or not, but they’re not at the heart of their understanding. Whereas in the revised version that we put in here, with the model’s accuracy, functionality, and the scientific reasoning, it’s really all about the thinking: the accuracy, the completeness, and the sophistication of the student’s thinking is what we’re really trying to elicit by doing this lunar model in the first place.

GONZALEZ: Right, yeah. And this second version has a lot more language that sort of gets very granular about was this model done correctly? It specifies in there, for example, that the model shows or suggests the rotation of the Earth. It talks about does it actually describe the patterns versus just having them up there? Does it explain why the phases occur? So it’s really demanding a lot more of the student in terms of the science as opposed to yeah, did you get the science right but did you also use sparkly paper and spell everything correctly and turn it in on time?

WISE: Right. And I think, you know, again, that’s part of the power of the rubric is that we’re trying to be transparent with our learners, but we don’t want to be prescriptive either. We want to create an opportunity for them to follow a learning path that works best for them and where they are in terms of their own learning. But we want to be really clear, like hey, this is where you want to go.

GONZALEZ: Right.

WISE: And this is what you should be paying attention to when you’re working on this model, and not so much the aesthetics, if aesthetics wasn’t really the goal. In this case, for the lunar model, it’s not, or it shouldn’t be.

GONZALEZ: Right. It’s interesting too, because those other types of criteria, I think teachers put them in because they’re trying to just get everything in there that they would possibly assess, and it’s those other things that students look at and say, okay, well I can definitely do that. That’s the easy thing to just make it neat, or something like that. Whereas getting the science right is more challenging, so when you strip away all this other stuff, and they only have the science to focus on, it really does force everybody to concentrate on what matters.

WISE: Yeah, I think that’s true, and I think there’s a little bit of fear with some of this other stuff that it feels subjective, whereas did it have the proper heading or did you use the appropriate Times New Roman 12-point font?

GONZALEZ: Yeah.

WISE: You have it or you don’t.

GONZALEZ: Yeah, yeah.

WISE: Right? And therefore there’s no gray and the teacher could feel justified with, well this is why I scored you this way.

GONZALEZ: Yeah.

WISE: And the kid will complain, because I guess I didn’t have that. But again, it’s not really getting the focused attention on what really matters.

GONZALEZ: Right. So with this first tip, really, teachers listening should be thinking about, are there some things in my current big, bloated rubrics that I could pull out because they really just aren’t important? Or maybe combine? There may be a few factors, like in this first one we’ve got creativity, attractiveness, mechanics, and timeliness; those things could all be put into one small category that’s only worth a little bit, so that it’s like, yeah, all that stuff is important, but what really matters is: Is the model accurate? Does it show functionality? And is the reasoning correct?

WISE: Right. And I’ve also seen teachers pull those kinds of things out as a checklist.

GONZALEZ: Ah, gotcha.

WISE: Do you have it or not?

GONZALEZ: Right.

WISE: It’s sort of, you know, a checklist item, whereas a rubric really describes a continuum of performance. If the requirement is 12-point font, it’s not like you’re partially there with 11-point font.

GONZALEZ: Right.

WISE: You either have it or you don’t. It’s binary.

GONZALEZ: Yeah.

WISE: And so if things are yes or no kind of thing, have it or not, those are usually best in a checklist.

GONZALEZ: Okay.

WISE: Whereas a rubric really describes a range of performances. So really think about whether you’re including stuff in the rubric that really matters and really needs a description of performance moving from naive to sophisticated. If not, you may want to just pull that out.

GONZALEZ: Yeah.

WISE: And have X-number of points, like do you have it or not?

GONZALEZ: Yes, okay. Gotcha. And I’m glad you mentioned the whole idea of a checklist, because some people listening are right now picturing a four-column, fully developed analytic rubric, and other people, especially if they have been following my work for a couple of years, might already be using what we call the single-point rubric, which only has a single column of criteria descriptors rather than four different levels. And really this principle could be applied either way; it comes down to what criteria you’re describing. Is it a lot of different things, or is it really the stuff that’s most important?

WISE: Exactly.

GONZALEZ: So then No. 2 is really kind of continuing the same conversation, which is weigh the criteria appropriately.

WISE: Yeah, so this is, again, sometimes we get stuck by thinking if we have a certain number of criteria they have to be weighed equally.

GONZALEZ: Right.

WISE: Just because I have five criteria, I have to have them each worth 20 percent.

GONZALEZ: Right.

WISE: Or I have four criteria, so they each have to be worth 25 percent. And we get to decide. We have a lot of power as teachers, and one of those powers is how much each criterion weighs, and it really depends on what you value or what you think the kid should be expected to do with this particular piece. You want to honor the work that the kids are doing with the weight of the different criteria, to make sure that yes, this is really what’s most important and therefore it should be worth the largest percentage, or these two criteria are the most important, so they should be equal but together worth the largest amount, maybe 50 percent.

GONZALEZ: Right.

WISE: We can manipulate the scoring or the weighting in many different ways, but we really want to signal through our weighting that this matters more than that.

GONZALEZ: Yeah.

WISE: And that could change over time. Your weight for certain things can change as students start to become more familiar with a skill or more accomplished with a concept. If you’re going to repeat the same criteria, you could change the weight over time. But just by adjusting it, you signal your valuing of the different criteria and the kind of thinking that you want from the students.

GONZALEZ: Right. So the example we used: we pulled that same lunar phases rubric, where we had the three areas — accuracy, functionality and reasoning — and we added one more category, assuming that the teacher wanted to give some points for just general mechanics: is it correct, can you read it, is it free of mistakes? In the first example, though, that teacher took that mechanics grade and made it 25 percent, so you’ve got the other three criteria and then that one, and each is worth 25 percent of the grade. So you end up with a kid who might be rock solid on the science but not so much on the mechanics, and they could end up with a C on the project.

WISE: Right, which again is not really indicative of the learning.

GONZALEZ: Right.

WISE: And it gives a false read.

GONZALEZ: Yes.

WISE: Conversely you could have some kids who rock that stuff and who don’t really get the thinking —

GONZALEZ: Right.

WISE: — and their grade could be inflated because they’re able to do all the checklist-y stuff well —

GONZALEZ: Exactly.

WISE: — and it somewhat disguises their lack of understanding.

GONZALEZ: Absolutely. So talk about how we fix that.

WISE: I think we fixed that in the second example: model accuracy, functionality, and reasoning, the three criteria that we felt were most important, were 30 percent each, or a total of 90 percent. So mechanics, the stuff the teacher was still interested in, the craftsmanship and the grammar and the neatness and all that, is still there because it is important, but it doesn’t outweigh the other stuff that’s really much more important in terms of the student demonstrating their understanding.

GONZALEZ: Right. And so that was just assigned 10 percent of the grade.

WISE: Correct.

GONZALEZ: So it can hurt a little, but not much, and 90 percent of that score is devoted to the science. So it’s a simple shift, really. Teachers can take existing rubrics and just change how they’re assigning points or percentages.

WISE: Right, right.
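
To make the weighting arithmetic concrete, here is a minimal sketch in Python. The 25/25/25/25 versus 30/30/30/10 split mirrors the example discussed above, but the student’s scores, the 4-point scale, and the straight score-to-percentage conversion are hypothetical simplifications for illustration, not Mark’s actual rubric or scoring process.

    # Sketch: how criterion weights change a final grade (hypothetical scores).
    def weighted_grade(scores, weights):
        """Combine per-criterion scores on a 0-4 scale into a percentage grade."""
        assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights should total 100%"
        return round(sum((scores[c] / 4) * weights[c] for c in weights) * 100, 1)

    # Hypothetical student: strong on the science, weak on mechanics.
    scores = {"accuracy": 4, "functionality": 4, "reasoning": 4, "mechanics": 1}

    equal_weights   = {"accuracy": 0.25, "functionality": 0.25, "reasoning": 0.25, "mechanics": 0.25}
    revised_weights = {"accuracy": 0.30, "functionality": 0.30, "reasoning": 0.30, "mechanics": 0.10}

    print(weighted_grade(scores, equal_weights))    # 81.2: mechanics drags the grade down
    print(weighted_grade(scores, revised_weights))  # 92.5: the science carries the grade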

GONZALEZ: Okay. So the third one is check your math.

WISE: Right.

GONZALEZ: What’s up with that one?

WISE: Well, first of all, I wanted to do a little shout-out to the Beastie Boys’ “Check Your Head,” but this is “Check Your Math.” And in this case, again, I see this all the time, typically on the four-point rubrics where the four is sort of reserved for your A-plus or exceeds-expectations level and the three is where the teacher expects most of the kids to be, that sort of proficient range. What ends up happening, though, if the teacher doesn’t adjust the scale, is that if a student scores in the three range, the meets-expectations range, for all the different criteria, by default three out of four is a 75. So either the kid gets penalized by this unfair grading system, really being hurt by getting a C when they met expectations, or the teacher looks at that and says, this kid did a great job and they got a C, I better change some of my scoring, and so they end up marking more things as a four, or exceeds expectations, in order to get the kid the grade they want.

GONZALEZ: Yeah.

WISE: And so either way, kids get either unfair grading or dishonest feedback, and neither one is really great for the learner.

GONZALEZ: Right. So really No. 3 is more than anything just a warning: before you distribute this rubric and start using it with real students, it would be a good idea to run through some scenarios in your head and make sure that if you have assigned points to things, or made certain things equivalent to points later on, then once that plays out into a grade, the level of mastery the student demonstrates on the rubric actually translates to a grade that looks about the same, philosophically I guess, as what we would think of for a student at that level. That could be done by assigning point ranges, right? I know that when I used a four-point rubric, that bottom level would be zero to 50, or even 60. It was a huge, huge point range in the bottom level, and then the ranges would get increasingly smaller as I moved up, to where somebody at that four level would be earning something in the 92 to 100 range, or something like that.

WISE: Yeah. I talk with teachers all the time about this: if you looked at one of your columns vertically and said, okay, let’s say a kid was performing exclusively in the three column, doing all of those things, then in your head, what is that grade? And you just have to make sure that whatever that grade is aligns to your scoring.

GONZALEZ: Yes.

WISE: So if in your head, wow, a kid does all those things? That’s sort of a B-plus, A-minus. Well then your range should be somewhere in that eight to nine range, or, if you want to get funky with decimals, a 3.5 to 3.75, or something like that, so that if a kid was proficient in that vertical column, the grade would match what that expectation was.

GONZALEZ: Right.

WISE: And that there’s not a disconnect. Take it down one, like let’s say the two, the sort of approaching level. Well, if a kid got a two out of four for all that stuff, that’s failure, that’s a 50.

GONZALEZ: Exactly, right. If that translated to straight-up points and percentages, then that would not work, yeah.

WISE: Yeah.

GONZALEZ: I mean, and I believe those point ranges should be communicated to students ahead of time with the rubric. That shouldn’t be a surprise later on.

WISE: Yeah.

GONZALEZ: And there are so many different rubric styles. I used to put point ranges on my rubrics so that they would know, because I think a lot of kids don’t know what you do with a three or a four, like what does that actually look like?

WISE: Right.

GONZALEZ: And in your post we’re going to go ahead and link to another post that I have that’s just about how to actually convert rubric scores to grades, because I’ve gotten that question a lot from a lot of people. I think people are curious about other teachers’ processes, so I share the process that I recommend in that one too.

WISE: Great.
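
As a companion to the “check your math” point, here is another hypothetical Python sketch: instead of converting a rubric level straight into a percentage (where a 3 out of 4 becomes a 75), each level maps to a grade band chosen to match what that level is supposed to mean. The cutoffs below are purely illustrative; they are not the conversion process described in the post linked above.

    # Sketch: mapping rubric levels to grade bands so a "meets expectations" (3)
    # lands in the B range instead of defaulting to 3/4 = 75. Cutoffs are illustrative.
    NAIVE = {1: 25, 2: 50, 3: 75, 4: 100}   # straight points: a 3 becomes a C

    GRADE_BANDS = {          # (low, high) percentage for each rubric level
        1: (50, 64),         # beginning
        2: (65, 79),         # approaching
        3: (80, 91),         # meets expectations
        4: (92, 100),        # exceeds expectations
    }

    def band_grade(level, position=0.5):
        """Convert a rubric level to a percentage, placed within that level's band."""
        low, high = GRADE_BANDS[level]
        return round(low + (high - low) * position)

    print(NAIVE[3])        # 75: reads as a C even though the student met expectations
    print(band_grade(3))   # 86: the same performance reads as a solid B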

GONZALEZ: Okay. So No. 4 is can-do rubrics versus can’t-do rubrics.

WISE: Clever word play.

GONZALEZ: Yeah.

WISE: And what that really is getting at is that a lot of our rubrics, without our even realizing it, I think, contain a lot of language around deficiency. Other than the fourth column, the exemplar column, everything else is sort of less-than. Somewhat, mostly, lacking, insufficient, inadequate, all different degrees of what the kid isn’t doing. Versus what they can do, in the affirmative, and I use the example here of the swim chart. The great example is going from the tadpole to the seal: from someone who can put their head in the water and blow bubbles for 15 seconds to the swimmer who’s able to tread water for five minutes and swim the length of the pool using a variety of different strokes. I mean, you would not say to the tadpole, you can’t swim.

GONZALEZ: Right.

WISE: Or this is what you can’t do.

GONZALEZ: Right.

WISE: But it’s really clear about, hey, you’re on a progression, to eventually swim in the deep end without a lifeguard watching you or your parent worrying about you, and you can do it on your own. So it’s a clear goal for the student of what they want to achieve, and it also says to them, this is a step.

GONZALEZ: Yeah.

WISE: And you’re here right now, but it doesn’t mean that you’ll always be here.

GONZALEZ: Right.

WISE: And it talks about what they can do. And so one of the things I find with rubrics is that kids tend to get them twice — once at the start of the project and then again at the end of the project when there’s a grade attached. And so at that point the feedback isn’t very helpful.

GONZALEZ: Yeah.

WISE: And so when there’s language that’s somewhat or mostly or lacking, it’s really confusing and fuzzy to kids. But if it’s written in terms of what they’re able to do, they can say, okay, well, I’m here now.

GONZALEZ: Yes.

WISE: And they can then self-assess, or peer-assess, or work with the teacher: what do I need to do to move up one?

GONZALEZ: Yes.

WISE: Oh, I see. This is what I have to do. Just like the new swimmer doesn’t ask the lifeguard every single time, what do I have to do to go from tadpole to seahorse? It’s clear. I know I need to do this. And so I’ve accomplished this, and now I can move on; now I have to swim the width of the pool, and I can put my energies toward that. And I think there are rubrics out there that do that, and it’s really hard for teachers to do, I think, because we just sort of know what it doesn’t look like, rather than what approaching or developing or beginning does look like. But it’s so helpful if we’re able to put that kind of language in the affirmative, and to use the rubric during the project, so that students are really able to see where they are and what they need to do in order to improve.

GONZALEZ: Yes. And when I jumped in there for a second, it was just to echo that: anybody who’s doing any kind of long-term project should make revisiting that rubric several times part of the practice. I’m thinking the rubric really should just be out and hanging around for the duration, because it’s a tool for checking and assessing where you are. So what we have here is two different examples, one that uses more of the deficit language, which is more typical for rubrics, and another that uses the can-do language. And the expression “can-do” we’re actually pulling from the language community, people who typically work with English language learners; they have a whole set of rubrics called can-do descriptors that use this type of language. So what we’ve got here is just two rows of a rubric describing a research project that students have to do. One of the criteria is that students should be posing a significant researchable question, or actually questions. At the beginning stage, basically the one level or the lowest level, a typical rubric might say that the student fails to pose a researchable question on a local, regional, or global issue and/or doesn’t explain its significance. So what we’re saying is, they didn’t pose a question, and they didn’t explain its significance. So what is the can-do version of that?

WISE: So the can-do version of that would be: the student is able to pose a broad question on a local or regional issue with reference to the global community. So it’s not what the teacher’s expecting, right? It’s not the more sophisticated question that explains its significance to the global community, that’s actually researchable. But it says, okay, right now it’s a broad question, and you’re able to do that, so what do you need to do to move it, to hone it, to make it a researchable question?

GONZALEZ: Right.

WISE: To be a question that is worth digging into, that you can try to surface some really interesting information or insight about, as opposed to something that’s either so broad that it’s not very researchable or so narrow that you could Google it and find the answer in two seconds? So I think it speaks to the degrees of sophistication the student has as someone who can develop a researchable question, and it gives a continuum: here’s where you might be right now, or here’s where you’re starting, or here are some examples, and then what does it look like as you start to refine it and become more sophisticated in developing these kinds of questions?

GONZALEZ: It’s funny, because I would imagine that for someone who is trying to develop a can-do style rubric, it would take some work; you’d have to be thinking about what it looks like when somebody is at the very beginning of the right track, you know?

WISE: Right. Or at the very beginning of what a sixth-grader would look like, or a senior, or whoever’s in front of you, the kinds of kids that you have. And I think that transitions us a little bit to the last thing, which is models, models, models. I find that the biggest bang for your buck, if there’s one thing teachers could change it would be this, is to show models: models of student work, of real work that’s out there, that tie to the exemplars and non-exemplars of the rubric.

GONZALEZ: Yeah.

WISE: Teachers do sometimes show models, but it tends to be just, here’s the awesome kid from two years ago, here’s work you guys will never get to, but here it is, everyone do this. But I think it’s super helpful to students to say, well, here’s what an approaching looks like versus a meeting versus an exceeding versus a beginning. And let’s look at the pieces, and tell me, what do they mean to you guys? I would even say don’t tell them what they were —

GONZALEZ: Right.

WISE: — in terms of how you’ve already sorted them in your own head according to the rubric. Instead: here are these different models, let’s categorize them on our little Goldilocks scale, not-so-good, pretty good, really good, and then let’s have that conversation about why. Why did we put them into these piles? What seems to distinguish them from one another? In that way, they’re starting to build the rubric themselves, and they’re starting to come up with the descriptors, because they’re seeing actual models of this work and the impact we want it to have. The teacher could have already drafted the rubric, but now they’re able to grab the language the students came up with and put it into the rubric. And what’s also really powerful is then to link these examples to the rubric itself. Because oftentimes we perseverate about the language we use, distinctly, clearly, but when the kids experience it, they have rubric fatigue. They’ve gotten rubrics all day, all year, their whole career, and they all start to look the same.

GONZALEZ: Right.

WISE: And so while we are really so myopically focused on our language, for the kids, they all tend to bleed together. So if we could have visuals and real models of what we’re really talking about here, and are able to parse the differences between them, that stays with the student.

GONZALEZ: Yeah, yeah.

WISE: They’ll remember that, and they’ll internalize our expectations while they’re doing the project.

GONZALEZ: I’ve had students grade samples too, and I’m sure other teachers listening to this are thinking of that. You can do a lot of different things; co-constructing the rubric is a fantastic thing to do. I’ve had a lot of success with saying, here are a couple of models, I want you to work together and decide what grade you think each would have gotten according to this rubric, and for the first time, it actually gets them to pay attention to the language in the rubric in the first place.

WISE: Yeah. Again, I think, like you said, that and also doing it without the rubric and just having them —

GONZALEZ: Talking about the qualities of it.

WISE: Yeah, just, why do you like this one more than this one?

GONZALEZ: Right.

WISE: What is it about it?

GONZALEZ: Yeah.

WISE: And then in effect, they’re pulling out the criteria and the descriptors, and that’s going to stay with them, versus just handing them this 8.5-by-11 piece of paper in 10-point font that they’ll put in their binder and may or may not ever look at again until it comes back with the project and an 83 on it.

GONZALEZ: Right, exactly. You know, what I’ve found too with those types of exercises, where students are evaluating these samples, is to not use the extremes, not the obvious one and the obvious four, because those are just easier to call out. It’s the twos and the threes that make it a lot harder, especially the ones that have a little bit of this and a little bit of that.

WISE: Yep.

GONZALEZ: Where it’s harder to tell, and it would require some discussion and some debate and some actual text-based evidence where you’re pointing to certain things. Teachers I think need to realize that this is time-consuming to do, and it takes away from your delivery of content, but it is really valuable, it’s valuable work, really.

WISE: Right, and again, I talk about this with the teachers all the time. Our job is not to cover the content.

GONZALEZ: Yep.

WISE: Our job is to cause learning. And this causes the learning.

GONZALEZ: Yeah.

WISE: This is what we’re really supposed to be doing. So sometimes you need to go slow in order to move fast.

GONZALEZ: Yes.

WISE: And if you spend the time to sort of really get students to understand your expectation from the inside out, they’re going to be much more productive and much more effective in their thinking, and in ultimately in their performance because of this.

GONZALEZ: Right. And so I’m going to just quickly review these five points for people who are just listening, as a quick overview. The first one was to measure what matters: make sure the criteria on there are actually the things you’re trying to teach and assess. No. 2 is weigh the criteria appropriately: assign the right amount of points or percentages to the things that are most important. No. 3 is check your math: make sure that the final grade actually equates to the level of mastery that you have in mind. No. 4 is can-do rubrics, not can’t-do rubrics: try to make your language affirming, describing what a student actually can do, rather than deficit language. And finally, the fifth one is models, models, models: the more we can connect the language in our rubrics to actual models where students can see it in action, the better they’re going to understand the criteria.

WISE: Excellent summary.

GONZALEZ: Thank you, thank you. I’m looking at the notes. Was there anything else that you kind of wanted to add before we wrap up?

WISE: Yeah, I think, again, teachers are rightfully moving toward student-centered learning right now where kids are really more in charge and owning their learning process, whether it’s engaging in debates or Socratic circles or complex problem-solving or coming up with their own experiments or researching things that really interest them and designing maker stuff. I mean it’s all happening, and it’s awesome stuff. But these kinds of experiences don’t result in a single correct answer or follow a certain formula, and therefore, they’re going to require a different kind of assessment tool, which is the rubric.

GONZALEZ: Right.

WISE: So we know we’re going to have to use these things, and hopefully, just by making some tweaks to what people already have, you’re going to be able to make sure that the kids know what you want them to do in order to get there, to assess them honestly and fairly, and to give the learners themselves a way to make improvements during the process, so that the rubric is really part of the experience and not just an audit at the end with a grade attached.

GONZALEZ: I think this is kind of like a little mini course that we’ve just given people in rubric creation and development, and so I think it’ll be really helpful to new and seasoned teachers alike who are finding that their students are not responding to their rubrics the way that they would hope. If teachers wanted to contact you with any questions about your process, I believe we were going to go ahead and give them your Twitter handle.

WISE: Sure. My Twitter handle is @wisemancometh, and I’m happy to respond and engage and share accordingly.

GONZALEZ: Okay, all right, great. Thank you so much, Mark.

WISE: Thank you, Jenn.


For links to all the resources mentioned in this podcast, visit cultofpedagogy.com, click podcast, and choose episode 117. To get a weekly email from me about my newest blog posts, podcast episodes, and products, sign up for my mailing list at cultofpedagogy.com/subscribe. Thanks so much for listening, and have a great day.