Author: Renee Ho
Oh, the excitement of a new school year!
This academic year is particularly exciting for the world of community colleges and EdTech: the Robin Hood Foundation’s “College Success” Prize has selected three finalists, and those finalists have just begun implementing their products with students at the City University of New York’s (CUNY) community colleges.
Here’s the idea in a nutshell (full version here):
The Robin Hood Foundation is running a prize competition to spur technological innovation that can help community college students graduate. Three finalists were selected, and in August 2015 their products were introduced to qualifying CUNY students to be tested in a three-year RCT. The outcomes measured for success include full-time persistence after one year and associate’s degree completion after two and three years.
As someone who both cares about poverty and is a total nerd, I find this idea really cool. I also really hope it works, meaning that I hope the technologies are able to increase student graduation rates. (Full disclosure: I also worked on designing this prize competition.)
There has never been a prize competition like this before, where the prize’s defining characteristic is a social science experiment: an RCT with three experimental groups (Beyond 12, Education Advisory Board, and Kinvolved) and a control group. It makes sense for Robin Hood, a foundation that combines innovation, risk-taking, and a focus on metrics in what it calls “Relentless Monetization”.
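For the nerds among us, here is a minimal sketch (in Python) of what that four-arm random assignment might look like. The arm names come from the finalists above; the student IDs, sample size, and fixed seed are purely illustrative assumptions, not details of the actual study:

```python
import random
from collections import Counter

# Hypothetical student IDs; in reality these would be qualifying CUNY students.
students = [f"student_{i:04d}" for i in range(2000)]

# The four arms of the experiment: three products plus a control group.
arms = ["Beyond 12", "Education Advisory Board", "Kinvolved", "control"]

random.seed(42)  # fixed seed so the assignment is reproducible
random.shuffle(students)

# Assign shuffled students to arms round-robin, yielding roughly equal groups.
assignment = {student: arms[i % len(arms)] for i, student in enumerate(students)}

# Sanity check: each arm should hold about a quarter of the sample.
print(Counter(assignment.values()))
```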
What has always struck me is that technology development and RCTs can be polar opposites. Technology development requires rapid iteration and adaptation: prototypes are put in the hands of users as soon as possible to see how they like and interact with the product’s core value proposition and specific features. RCTs, on the other hand, require products to be held stable from when they are introduced to when the experiment is finished. In this prize competition’s case, that’s three years.
According to some EdTech types I spoke to, three years is longer than the lifespan of many start-ups in the space. I wonder whether there is a way for the industry to determine earlier if its products are achieving, or are at least on track to achieve, the desired impact.
Recognizing that there is an inherent tension between technology development and running an RCT, the prize rules set a parameter that reads:
Naturally, this parameter favored applicants that already had a minimum viable product, even though the competition was open to everyone, including applicants with a product only in the concept phase. There is nothing wrong with this, but then the prize becomes more of a proof-of-concept test (for the particular CUNY environment) than a tool to spur innovation and address a missing market supply.
The question remains: how do we test less developed products and ideas in a meaningful way? How do we really spur that elusive buzzword, “innovation”?
Wearing my Feedback Labs hat, I’m naturally inclined to say, “Well, why don’t we put the products in front of CUNY students, get their feedback, and observe their behavior?” Could this kind of data still be considered rigorous even if it is not collected through a three-year RCT? Could it meaningfully correlate with longer-term impact measures?
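To make that question concrete, here is a minimal sketch of the kind of check I have in mind: does an early feedback signal (say, first-semester engagement with a product) correlate with one-year full-time persistence? Every number below is simulated purely for illustration; nothing here reflects actual CUNY or prize data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: an early feedback signal (e.g., weekly product engagement
# in the first semester) and a later binary outcome (full-time persistence at
# one year). Both are simulated, with persistence weakly tied to engagement.
n = 500
early_engagement = rng.normal(loc=3.0, scale=1.0, size=n)
prob_persist = 1 / (1 + np.exp(-(early_engagement - 3.0)))
persisted = rng.binomial(1, prob_persist)

# Pearson correlation between the continuous signal and the binary outcome
# (equivalent to a point-biserial correlation).
r = np.corrcoef(early_engagement, persisted)[0, 1]
print(f"correlation between early engagement and one-year persistence: {r:.2f}")
```

If early signals like this turned out to track the longitudinal outcomes an RCT measures, funders could get a meaningful read on a product years sooner.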
Perhaps this kind of experiment—to understand the relationship between “beneficiary” feedback and impact—would be another space for funders to explore.