Peer Assessment at “Planet Scale” – the Problem
Daphne Koller had a problem. A professor in the Stanford University Computer Science Department, she had spent more than 15 years teaching computer science to small groups of very talented students. Now she was taking a leap into the unknown as one of the pioneers of the MOOC (massive open online course) movement coursing through the higher education establishment. Instead of teaching, say, 50 Stanford undergraduates in a course relying primarily on face-to-face instruction, she was about to teach tens of thousands of students across the globe – with no idea how much knowledge each student brought into the course.
How on earth would she assess what her students were learning? Sure, multiple-choice questions work for testing basic knowledge acquisition, and they scale easily to large numbers of students, but as any educator knows, they’re far from enough to gauge students’ deeper understanding of concepts. How could she scale up qualitative assessment reliably when working at “planet scale”?
The right approach needed to meet several criteria: it should (1) provide highly reliable/accurate assessments, (2) allocate a balanced and limited workload across students, (3) be scalable to class sizes of tens or hundreds of thousands of students, and (4) apply broadly to a diverse collection of problem settings.
An Elegant Solution
Dr. Koller and her colleagues hit upon a simple but elegant solution to this problem: why not have the students assess each other, and make those assessments part of how they learn the material in the class? That way, the students themselves would take on one of the harder-to-scale aspects of teaching large groups of students at a time. Tom Sawyer (who got his friends to paint his fence for him and even pay him for the “privilege” of doing so) would have been proud!
Things didn’t work great right off the bat, but as they kept experimenting and refining the approach, they found that with the right level of support, students were well able to assess each other – AND they learned important skills while doing so. Since those early days three years ago, this approach to “peer assessment” has been used successfully in more than 100 online classes, with millions of students, across all kinds of subjects.
Inspiration for SkillStore
I remember sitting in Dr. Koller’s office back in 2012, talking to her about what she and her colleagues were learning from their early experiments in peer assessment. That was an “A-ha” experience for me. Back then, I was leading Portmont College (now Mount Saint Mary’s University Online), a non-profit online college aimed at giving low-income Americans access to quality college degrees. I realized then that this approach to peer assessment could apply to a whole range of subject matter – from college courses to employee training. And it held the key to training a much larger set of learners at a time, at a lower cost than was possible with existing approaches.
Fast forward three years – we applied these lessons in large-scale peer assessment to develop our approach to peer-to-peer feedback, a core part of SkillStore’s learning model.
Want to learn more about SkillStore? Request a demo below to see how SkillStore can improve management and leadership programs at your company.