TL;DR: The 8 structural problems in edtech
| Problem | Summary |
|---------|---------|
| Time-based, not mastery-based | Education advances students by age, not competence |
| No closed loop | Edtech can’t prove outcomes, so it can’t improve them |
| Fragmented ecosystem | Every app is a silo with its own data model |
| No shared metrics | No common language for effort, progress, or efficiency |
| Wrong incentives | Engagement is rewarded, even when it hurts learning |
| Invisible failure | Learning problems surface too late to fix |
| Motivation ignored | The biggest bottleneck is treated as an afterthought |
| Developer tax | Every team rebuilds the same infrastructure from scratch |
Measuring time, not mastery
Traditional schooling measures progress by calendars, attendance, and age rather than demonstrated competence. Students advance through grades with gaps, and those gaps compound silently until “grade level” becomes a label rather than a description of what a student can actually do. This “social promotion” model makes it nearly impossible to diagnose why a student struggles. Is it the content? The instruction? Missing prerequisites? The system can’t tell. Instruction quality varies widely, outcomes remain opaque, and students advance based on time served rather than knowledge gained.

Can’t improve what you can’t prove
Most education products can show activity (minutes spent, clicks, lessons completed) but cannot prove causal impact on durable learning. Even when test scores are available, they’re often disconnected from what happened inside the product. Iteration is slow. Arguments about efficacy are endless.

“Timeback has the only closed data loop on learning that is in existence, and if students aren’t learning, it is our fault.”

Joe Liemandt, Founder of Timeback and Alpha School
| What gets tracked | What actually matters |
|---|---|
| Minutes spent | Knowledge retained |
| Lessons completed | Skills transferred |
| Daily streaks | Mastery demonstrated |
| Click-through rate | Test score improvement |
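Closing the loop means joining in-app activity to externally verified outcomes. The sketch below shows the simplest version of that join: attributing each student’s externally measured score gain to tools in proportion to time spent. The record shapes (`sessions`, `scores`) and the function name are illustrative assumptions, not a real Timeback API, and time-proportional attribution is a naive stand-in for proper causal analysis:

```python
# Hypothetical in-app session logs and externally verified pre/post scores.
sessions = [
    {"student": "s1", "tool": "A", "minutes": 30},
    {"student": "s1", "tool": "B", "minutes": 30},
    {"student": "s2", "tool": "A", "minutes": 60},
]
scores = {"s1": (50, 62), "s2": (40, 58)}  # (pre, post) on an external test

def gain_per_minute(sessions, scores):
    """Attribute each student's score gain to tools in proportion to
    time spent, then compute per-tool gain per minute."""
    by_student = {}
    for s in sessions:
        by_student.setdefault(s["student"], []).append(s)
    totals = {}  # tool -> [attributed_gain, total_minutes]
    for student, sess in by_student.items():
        pre, post = scores[student]
        gain = post - pre
        total_min = sum(s["minutes"] for s in sess)
        for s in sess:
            t = totals.setdefault(s["tool"], [0.0, 0])
            t[0] += gain * s["minutes"] / total_min
            t[1] += s["minutes"]
    return {tool: g / m for tool, (g, m) in totals.items()}

efficiency = gain_per_minute(sessions, scores)
# With the sample data: efficiency["A"] ≈ 0.27, efficiency["B"] == 0.2,
# i.e. a minute in Tool A produced more verified gain than a minute in B.
```

Even this toy version answers a question most products cannot: which tool converted time into verified learning.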
Every app is a silo
Schools run dozens of tools across rostering, content, assessment, tutoring, analytics, and motivation. But apps rarely share a coherent underlying data model. Each new product must reinvent the same infrastructure from scratch:

- Rostering and identity management to sync student and class data
- Authentication and access control for logins and permissions
- Content storage and delivery for lessons and assessments
- Progress and mastery tracking to define what “done” and “learned” mean
- Analytics and event logging to capture what happens in the app
- Standardized test integration to connect to meaningful outcomes
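The list above amounts to a data model that every team re-invents. A minimal sketch of what a shared schema might cover is below; the type and field names are purely illustrative (not a real Timeback or OneRoster schema):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Student:
    student_id: str        # rostering and identity
    class_ids: list[str]   # class membership, synced from the SIS

@dataclass
class Event:
    student_id: str        # analytics and event logging
    kind: str              # e.g. "lesson_completed", "item_answered"
    at: datetime

@dataclass
class MasteryRecord:
    student_id: str
    skill_id: str          # progress tracking: what "learned" means
    mastered: bool         # demonstrated competence, not mere completion

@dataclass
class TestResult:
    student_id: str
    assessment: str        # standardized-test integration
    score: float

def prerequisites_met(records, student_id, prereqs):
    """Gate new content on demonstrated mastery of prerequisite skills."""
    mastered = {r.skill_id for r in records
                if r.student_id == student_id and r.mastered}
    return prereqs <= mastered
```

With shared types like these, a mastery gate such as `prerequisites_met` is written once instead of once per app, and every tool agrees on what “done” and “learned” mean.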

No language for effort, progress, or efficiency
Even when edtech apps work, schools can’t compare them. One product reports points, another levels, another completion percent, another time spent. None of these are interoperable, and most aren’t tied to externally verifiable outcomes. In a mature platform ecosystem, developers should be able to answer basic questions:

- Did 30 minutes in Tool A produce more learning than 30 minutes in Tool B?
- Which content sequences produce faster mastery for which students?
- Where are students stuck because of missing prerequisites versus confusion versus disengagement?

Today, these questions can’t be answered. Parents see a jumble of incompatible dashboards. Teachers can’t build a coherent picture of student performance. Administrators can’t make informed decisions about which products deserve investment.

Incentives reward engagement, even when it conflicts with learning
Many products are built to maximize retention metrics: time in app, daily streaks, content consumption. But time spent is not the same as learning. Systems that reward “doing school” can accidentally reward low-effort behaviors that look productive. Gamification often rewards completion rather than mastery. Students learn to optimize for points with minimal cognitive effort: clicking through explanations, guessing until correct, avoiding challenging content. Systems report high engagement while actual learning doesn’t happen.

“The difference between Timeback and most edtech: most apps lower mastery standards to gamify; we motivate students to power through mastery and high standards.”

Andy Montgomery, Head of Academics at Timeback and Alpha School
Learning failure is often invisible
Students can appear successful in a product while learning very little that transfers or persists. High in-app accuracy can be driven by pattern matching, memorization of specific items, or shallow strategies that collapse under variation. Common failure patterns:

- Students advance based on completion, not understanding
- New content is layered on top of gaps, causing compounding failure
- Correct answers in-app don’t transfer to real assessment results
- Learners infer wrong rules from poorly designed instruction
- Systems report success while actual learning doesn’t happen
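One way to surface these patterns early is to compare accuracy on practiced items against accuracy on novel variations of the same skill: a large gap suggests memorization rather than transfer. A minimal sketch, assuming a hypothetical response log (this is not a Timeback API):

```python
def transfer_gap(responses):
    """responses: list of (item_type, correct) pairs, where item_type is
    'practiced' (seen before) or 'novel' (an unseen variation of the skill).
    Returns practiced accuracy minus novel accuracy; a large positive gap
    flags in-app success that is unlikely to transfer."""
    def accuracy(kind):
        rel = [correct for t, correct in responses if t == kind]
        return sum(rel) / len(rel) if rel else 0.0
    return accuracy("practiced") - accuracy("novel")

# 90% on practiced items but only 40% on novel variations:
# the product reports success while real learning isn't happening.
log = ([("practiced", True)] * 9 + [("practiced", False)]
       + [("novel", True)] * 4 + [("novel", False)] * 6)
# transfer_gap(log) == 0.5
```

A check like this turns invisible failure into a measurable signal before gaps compound.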
Motivation is the bottleneck
Even the best instructional design fails if students won’t engage consistently. The industry often treats motivation as UI polish (badges, confetti, streaks) rather than as a core product problem with measurable consequences.

“90% of the solution is motivating the kid. It’s not the edtech.”

Joe Liemandt, Founder of Timeback and Alpha School
Developers pay the infrastructure tax
For builders, fragmentation and the lack of standards create a compounding tax:

- Rebuilding primitives: rostering, identity, permissions, observability
- Guessing at data models for courses, content, and results
- No way to validate impact, so iteration is slow and proof is expensive
- No reliable feedback loop to tell you what’s actually improving learning
This page describes the problems Timeback is built around. Next, see how the platform approaches
standards, measurement, and interoperability.
