
EDUCATION
Smart Grading
Role: Product Designer
Company: Quizlet
Year: 2020
Background
Written questions were graded too strictly, and high override rates added friction while users studied. The original system offered three grading mechanisms:
Unsure grading - when the grader was unsure, users chose whether to mark an answer correct or incorrect.
Require one answer only - for answers listing multiple accepted parts (separated by slashes, commas, or semicolons), entering any one part was sufficient.
Override - users could override the grade on any written question.
The rigid system failed to catch typos, grammar slip-ups, or minor formatting differences, creating friction that disrupted study flow. A sketch of that behavior follows below.
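To make the rigidity concrete, here is a minimal sketch of exact-match grading with slash-, comma-, or semicolon-separated alternatives. The function name and logic are illustrative assumptions, not Quizlet's actual implementation.

```python
import re

def strict_grade(user_answer: str, correct_answer: str) -> bool:
    """Hypothetical sketch of the original exact-match grading.

    The stored answer may list alternatives separated by slashes,
    commas, or semicolons; any single alternative is accepted, but
    only on an exact (case-insensitive) match.
    """
    alternatives = re.split(r"[/,;]", correct_answer)
    normalized = user_answer.strip().lower()
    return any(normalized == alt.strip().lower() for alt in alternatives)

# A one-character typo fails, illustrating the friction described above:
print(strict_grade("mitochondria", "mitochondria"))  # True
print(strict_grade("mitochandria", "mitochondria"))  # False: typo rejected
print(strict_grade("NaCl", "table salt/NaCl"))       # True: one alternative suffices
```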
Data Highlights
42% of short-answer submissions were overridden by users
Typos and near-synonyms were commonly graded incorrect
Numeric answers (e.g., thresholds like 85%) were mistakenly matched as letters and graded incorrect
SOLUTION
AI‑Enhanced, Flexible Grading
To address the rigidity of Quizlet’s original short-answer grading system, I designed a smarter, AI-powered solution that interprets user intent and reduces unnecessary friction during study sessions. The new system incorporated machine learning to recognize typos, semantic similarity (such as synonyms), and numerical patterns, allowing for partial credit and more forgiving grading. It mimicked how a teacher might assess a response, looking beyond exact matches to evaluate meaning, while still giving users control to adjust grading strictness through an in-app toggle.
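As a rough illustration of those behaviors, the sketch below stands in for the ML components with a simple character-similarity ratio plus numeric normalization, and uses a threshold parameter in place of the strictness toggle. Every name and rule here is an assumption for illustration, not the production model.

```python
import re
from difflib import SequenceMatcher

def flexible_grade(user_answer: str, correct_answer: str,
                   strictness: float = 0.85) -> bool:
    """Illustrative stand-in for typo- and number-tolerant grading.

    Not Quizlet's model: character similarity approximates typo
    tolerance, and numeric comparison approximates number handling.
    `strictness` plays the role of the in-app strictness toggle.
    """
    a = user_answer.strip().lower()
    b = correct_answer.strip().lower()

    # Numeric answers: compare the numbers themselves, so "85 %"
    # and "85%" are treated as equivalent despite formatting.
    nums_a = re.findall(r"\d+(?:\.\d+)?", a)
    nums_b = re.findall(r"\d+(?:\.\d+)?", b)
    if nums_a and nums_b:
        return [float(n) for n in nums_a] == [float(n) for n in nums_b]

    # Text answers: tolerate small typos via a similarity threshold.
    return SequenceMatcher(None, a, b).ratio() >= strictness

print(flexible_grade("mitochandria", "mitochondria"))  # True: one-letter typo
print(flexible_grade("85 %", "85%"))                   # True: same number
print(flexible_grade("osmosis", "mitochondria"))       # False: unrelated answer
```

Raising or lowering `strictness` mirrors the toggle: stricter settings approach exact matching, looser ones forgive larger typos.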
By layering in intelligent defaults with transparency (e.g., a “Smart-graded” label), the design improved trust and reduced the need for manual overrides. A/B testing confirmed its impact: override rates dropped significantly across both mobile and web, while study completion and retention metrics rose. This solution balanced the benefits of automation with user agency, ultimately creating a more personalized, equitable learning experience.
Study Experience
The redesigned study experience became more personalized and adaptive through the integration of AI-powered grading. Learners received immediate, more accurate feedback that accounted for typos, phrasing variations, and partial correctness, helping them maintain momentum and confidence during study sessions. The system supported both short- and long-answer questions, using machine learning to evaluate meaning and structure beyond exact matches.
We A/B tested flexible grading across written question types and added a subtle “Smart-graded” label so users would know when intelligent grading was applied, with the option to opt out. Results showed a clear positive impact: the overall override rate dropped by 8% on mobile and 40% on web, with even stronger improvements when isolating questions eligible for the smart grading service. The added flexibility and transparency helped drive more completed study sessions, repeat engagement, and better outcomes.
Results + Learnings
53% drop in overrides - overall override rates fell significantly (see the arithmetic sketch after this list).
+3.5% increase - in the number of terms mastered.
+4% increase - in users reaching the end-of-round screen.
More study sessions - over the course of the experiment, users initiated additional study sessions.
Low opt-out rate - indicates users found the new grading acceptable.
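For context on how a headline figure like the one above relates to the 42% baseline from the Background section, here is the relative-drop arithmetic with hypothetical submission counts; the counts are illustrative only, not the experiment's actual data.

```python
def override_rate(overrides: int, submissions: int) -> float:
    """Share of written-answer submissions the user manually overrode."""
    return overrides / submissions

# Hypothetical counts, for illustration only (not the experiment's data):
control = override_rate(4200, 10000)    # 42%, the baseline cited above
treatment = override_rate(1974, 10000)  # what a 53% relative drop implies

relative_drop = (control - treatment) / control
print(f"Relative drop in overrides: {relative_drop:.0%}")  # 53%
```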
PERSONALIZATION - tailor grading to individual needs: subject matter, learner style, and exam format.
FLEXIBILITY - empowered learners to choose the level of grading strictness that best supported their learning, while still benefiting from intelligent defaults.
GRADING SETTINGS - a simple, transparent system paired with user control enabled learners to focus on mastering content rather than battling grading rules, ultimately creating a more supportive and motivating study experience.