Listen to my interview with Rupa Chandra Gupta (transcript):
In schools, we use more tech tools every year. We also have very little time to vet them for quality. Do the math and you have a formula for some tech choices that may not be serving our students as well, or as equitably, as they should be.
It’s easy to dismiss this as no big deal. So what if we occasionally adopt something that isn’t the very best choice? The answer to that depends on a couple of factors: Are we spending a lot of money on the tool? Is it going to replace other learning experiences? Will it be time-consuming to adopt? Are we expecting it to close gaps and provide remediation? If the answer to any of these is yes, then it would definitely be a big deal if our chosen tool didn’t actually do what we thought it did. It would be an even bigger deal if that tool ended up widening the very gaps we were trying to close.
This is not to say that schools are just going about their tech decisions all willy-nilly. Surely everyone is acting in good faith. But when all the tools seem ideal, when they all promise to solve some of our most persistent problems, it’s pretty hard to figure out which one to pick. What we need is a framework for making these decisions, a set of practices that can help us determine which tool is really going to deliver on its promises.
Rupa Chandra Gupta, founder and CEO of Sown to Grow, is hoping to contribute something to that framework. As a former school administrator and the head of an ed tech company, Gupta has been both a consumer and a producer; this has raised her awareness of the interplay between equity and technology. Now she wants to hold herself and her peers to a higher standard when it comes to designing tools that meet the needs of more students.
Although Gupta is a believer in technology’s potential to boost learning, she has learned that it can also accelerate our mistakes. “Technology amplifies whatever is happening,” she says. “If we’re widening a gap, it can be amplified by technology, and it happens faster, and it happens sometimes under the radar, because teachers and students might not be having every interaction in person anymore.”
When Tech Falls Short
The earliest seeds of this idea were planted when Gupta was working for a middle school that was undergoing a lot of significant change. As part of their transformation, the school adopted a comprehensive, personalized learning platform. “We invested a ton of time, weeks of professional development over the summers. We changed fundamentally the core of our instructional model—everybody rewrote their curriculum.”
At first, things seemed to be going fine, with students improving on benchmark assessments from fall to winter. “When we first pulled the numbers, if you looked at the average scores, we saw pretty significant growth of students overall. Great, right? Everyone’s excited.”
But a closer look at the numbers uncovered a different story. “I disaggregated the data,” Gupta explains, “and what we found was our students who were entering sixth grade on or above grade level were soaring. They were doing incredibly well in that self-directed learning environment. But our students who were coming in behind grade level were actually falling further behind. Not just moving forward at a slower pace or even staying flat; they were falling further behind.”
Despite their investment of time and money into the platform, Gupta and her colleagues decided to stop using it. “There might have been some room to tweak and kind of modify,” Gupta says, “but the disparity was so wide that it was clear that we had to just stop.”
Obviously, this decision was inconvenient, and it left Gupta with the feeling that there had to be a better way, a more deliberate, systematic approach to evaluating tech before diving in. The following six strategies are what she suggests.
6 Strategies for Deeply Assessing Tech
Whether you’re considering a new tool or wondering whether one that’s currently in use is really effective, these six strategies can help you make more informed decisions.
1. Use It Like a Student
Sign in as a student and go through all the core elements of a tool. Put yourself in the shoes of one of your higher-performing students and one of your lower-performing students. How does the tool respond when students make mistakes? Where are the challenges? How can you solve them?
2. Launch a Pilot Group
Although using a tool “as” a student can uncover problems, nothing works better than putting it in the hands of real students. Instead of launching a platform school-wide, take the time to pilot it first with students. Gather a diverse group for this—both high achievers and students who are likely to struggle, native English speakers and English learners, and students who come from varied cultural and socioeconomic backgrounds—then pay attention to differences in how they are using, enjoying, and experiencing a product. Do they understand how to navigate inside the platform? Is the language used by the tool accessible to them? These kinds of questions should be considered before any kind of school-wide implementation.
3. Look Closely at Data
Although a tool might be giving you good results on the surface, your numbers could look different from another angle, so be sure to look closely. “If you do get data from any tools that you’re using yourself or from other benchmark assessments,” Gupta says, “break down the results by different student populations. Look for unintentional widening of equity gaps.”
This scrutiny should also be applied to tools that might not be purely academic, like apps meant to increase parent involvement. “I’ve seen digital portfolio apps that are beautiful and easy for kids to take pictures of their work and send home to parents and all of that,” Gupta says. “But I wonder: Is this a way for parents who are already engaged to get more engaged? Or is it really speaking to parents who we’ve been trying to bring into the fold? If parents don’t have smartphones and computers at home, can they access this stuff? If there is a subset of folks who aren’t able to engage or access, it’s probably folks who we want to make sure we’re not leaving behind, right?”
4. Think About Why
Ask yourself critical questions about how and why something works to improve student learning. “How is this tool fundamentally changing something about teaching and learning?” Gupta says. “What is it about this that’s innovative or different? I think when you ask yourself those questions, you can think about how that’ll play out for different groups of students. Is this tool truly changing learning experiences, or is it just a worksheet in an online format?”
5. Ask About Impact
If you spend a few minutes on an ed tech company’s website, you’re likely to find statistics about the tool’s effectiveness. Gupta has noticed that these numbers are rarely disaggregated by different levels of learners. “There’s not nearly enough transparent information about this,” she says. “So I would put the burden on people like me who are building tools. Ask them about evidence of impact in working with different types of learners. Like, ‘tell me what the difference is between these different types of students I serve.’ And if you don’t know, how are you going to find out?”
6. Follow Your Gut
“Experienced educators have such an amazing sense for what’s going to work well for their students in their context,” Gupta says. “So trust your gut.” Listening to your gut can prompt you to take a closer look and follow through with the other steps listed above.
Does this mean you have to stop using a favorite tool? Not necessarily. “None of this is intended to suggest that teachers stop using things they like,” Gupta says. “It’s more like OK, this is making me nervous about X, Y, and Z. What scaffolds am I going to put into place? It’s meant to make sure that this thinking is a part of the protocol when you are testing new tools and ideas. Because if we can elevate it in the conversation, then I think it’s more likely that the whole system will adjust to make sure it’s elevated in importance, right?” ♦