| Problem | Summary |
|---------|---------|
| Time-based, not mastery-based | Education advances students by age, not competence |
| No closed loop | Edtech can’t prove outcomes, so it can’t improve them |
| Fragmented ecosystem | Every app is a silo with its own data model |
| No shared metrics | No common language for effort, progress, or efficiency |
| Wrong incentives | Engagement is rewarded, even when it hurts learning |
| Invisible failure | Learning problems surface too late to fix |
| Motivation ignored | The biggest bottleneck is treated as an afterthought |
| Developer tax | Every team rebuilds the same infrastructure from scratch |
Education is one of the largest software markets on earth, yet most products cannot reliably answer a simple question: did learning happen, and did it happen efficiently? Schools, families, and developers are stuck optimizing proxies like usage, completion, and seat time because the ecosystem lacks shared data and shared measurement. Timeback exists because the core failures are structural. Education is organized around time. Edtech is organized around isolated apps. And learning is governed by constraints most products don’t instrument or respect.

Measuring time, not mastery

Traditional schooling measures progress by calendars, attendance, and age rather than demonstrated competence. Students advance through grades with gaps, and those gaps compound silently until “grade level” becomes a label rather than a description of what a student can actually do. This “social promotion” model makes it nearly impossible to diagnose why a student struggles. Is it the content? The instruction? Missing prerequisites? The system can’t tell. Instruction quality varies widely, outcomes remain opaque, and students advance based on time served rather than knowledge gained.

Can’t improve what you can’t prove

Most education products can show activity (minutes spent, clicks, lessons completed) but cannot prove causal impact on durable learning. Even when test scores are available, they’re often disconnected from what happened inside the product. Iteration is slow. Arguments about efficacy are endless.
“Timeback has the only closed data loop on learning that is in existence, and if students aren’t learning, it is our fault.” (Joe Liemandt, Founder of Timeback and Alpha School)
The industry standard is the opposite: open loops everywhere. Everyone hopes learning happened, but few systems can verify it. Edtech companies end up optimizing for engagement, retention, and session length because those are the metrics they can actually measure.
| What gets tracked | What actually matters |
|-------------------|-----------------------|
| Minutes spent | Knowledge retained |
| Lessons completed | Skills transferred |
| Daily streaks | Mastery demonstrated |
| Click-through rate | Test score improvement |

Every app is a silo

Schools run dozens of tools across rostering, content, assessment, tutoring, analytics, and motivation. But apps rarely share a coherent underlying data model. Each new product must reinvent the same infrastructure from scratch:
  • Rostering and identity management to sync student and class data
  • Authentication and access control for logins and permissions
  • Content storage and delivery for lessons and assessments
  • Progress and mastery tracking to define what “done” and “learned” mean
  • Analytics and event logging to capture what happens in the app
  • Standardized test integration to connect to meaningful outcomes
Each app becomes its own “mini platform.” Schools become the integration layer, manually reconciling data across dashboards that disagree with each other.

[Diagram: a student connects to App A, App B, and App C, each with its own dashboard and no shared record between them. Students use multiple tools that cannot share information; there is no shared record of learning.]
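To make the missing piece concrete, here is a minimal, hypothetical sketch of what a shared learner record could look like: one identity, one skill taxonomy, and mastery events that any app can write. The names and fields are illustrative assumptions, not Timeback’s actual data model.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical shared learning record. All names and fields are
# illustrative, not an actual Timeback schema.

@dataclass
class MasteryEvent:
    student_id: str       # one identity shared across apps
    app_id: str           # which tool produced the evidence
    skill_id: str         # a skill from a common taxonomy
    mastered: bool        # demonstrated mastery, not mere completion
    minutes_spent: float  # effort, reported the same way by every app
    timestamp: datetime

@dataclass
class LearnerRecord:
    student_id: str
    events: list[MasteryEvent] = field(default_factory=list)

    def mastered_skills(self) -> set[str]:
        """Skills with demonstrated mastery, regardless of which app taught them."""
        return {e.skill_id for e in self.events if e.mastered}

record = LearnerRecord("student-1")
record.events.append(MasteryEvent("student-1", "app-a", "fractions.add",
                                  True, 25.0, datetime.now(timezone.utc)))
record.events.append(MasteryEvent("student-1", "app-b", "fractions.sub",
                                  False, 10.0, datetime.now(timezone.utc)))
print(record.mastered_skills())  # → {'fractions.add'}
```

With a record like this, the three dashboards in the diagram would be views over one source of truth rather than three disagreeing silos.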

No language for effort, progress, or efficiency

Even when edtech apps work, schools can’t compare them. One product reports points, another reports levels, another reports completion percent, another reports time spent. None of these are interoperable, and most aren’t tied to externally verifiable outcomes. In a mature platform ecosystem, developers should be able to answer basic questions:
  • Did 30 minutes in Tool A produce more learning than 30 minutes in Tool B?
  • Which content sequences produce faster mastery for which students?
  • Where are students stuck because of missing prerequisites versus confusion versus disengagement?
Today, these questions can’t be answered. Parents see a jumble of incompatible dashboards. Teachers can’t build a coherent picture of student performance. Administrators can’t make informed decisions about which products deserve investment.
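Under a shared metric, the Tool A versus Tool B question collapses into a one-line computation. A hypothetical sketch, with invented tool names and numbers, assuming both tools report mastery and minutes in the same units:

```python
# Hypothetical per-tool usage data: skills mastered and minutes spent.
# With a shared schema, "learning per unit time" becomes directly comparable.
usage = {
    "Tool A": {"skills_mastered": 6, "minutes": 180},
    "Tool B": {"skills_mastered": 4, "minutes": 180},
}

def mastery_per_hour(stats: dict) -> float:
    """Skills mastered per hour of use: a crude but comparable efficiency metric."""
    return stats["skills_mastered"] / (stats["minutes"] / 60)

for tool, stats in usage.items():
    print(f"{tool}: {mastery_per_hour(stats):.1f} skills mastered per hour")
# → Tool A: 2.0 skills mastered per hour
# → Tool B: 1.3 skills mastered per hour
```

The metric itself is deliberately crude; the point is that no comparison of any kind is possible until effort and mastery are reported in shared units.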

Incentives reward engagement, even when it conflicts with learning

Many products are built to maximize retention metrics: time in app, daily streaks, content consumption. But time spent is not the same as learning. Systems that reward “doing school” can accidentally reward low-effort behaviors that look productive. Gamification often rewards completion rather than mastery. Students learn to optimize for points with minimal cognitive effort: clicking through explanations, guessing until correct, avoiding challenging content. Systems report high engagement while actual learning doesn’t happen.
“The difference between Timeback and most edtech: most apps lower mastery standards to gamify; we motivate students to power through mastery and high standards.” (Andy Montgomery, Head of Academics at Timeback and Alpha School)
Products that feel good and look busy win procurement cycles. Products that are efficient and rigorous are harder to explain using today’s dashboards.

Learning failure is often invisible

Students can appear successful in a product while learning very little that transfers or persists. High in-app accuracy can be driven by pattern matching, memorization of specific items, or shallow strategies that collapse under variation. Common failure patterns:
  • Students advance based on completion, not understanding
  • New content is layered on top of gaps, causing compounding failure
  • Correct answers in-app don’t transfer to real assessment results
  • Learners infer wrong rules from poorly designed instruction
  • Systems report success while actual learning doesn’t happen
When systems don’t instrument for transfer, prerequisites, and cognitive load, failure surfaces late: on real assessments, in later units, or in the next grade. Invisible failure is the most damaging failure mode in scalable learning systems.
Learning science calls this the “transfer problem”: success in the training environment doesn’t guarantee success elsewhere. Apps that don’t test for transfer can’t detect this failure.
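One way to instrument for the transfer problem is to compare in-app accuracy against accuracy on a held-out transfer assessment and flag large gaps. A minimal sketch, with an invented threshold and invented scores:

```python
# Compare in-app accuracy vs. accuracy on a separate transfer assessment
# covering the same skills. A large gap suggests pattern matching or
# memorization rather than durable learning. Threshold and data invented.
TRANSFER_GAP_THRESHOLD = 0.25

def flag_invisible_failure(in_app: float, transfer: float,
                           threshold: float = TRANSFER_GAP_THRESHOLD) -> bool:
    """True if apparent in-app success does not transfer."""
    return (in_app - transfer) > threshold

students = {
    "student-1": (0.95, 0.90),  # high accuracy that transfers: fine
    "student-2": (0.92, 0.55),  # high accuracy that collapses: invisible failure
}

flagged = [sid for sid, (in_app, transfer) in students.items()
           if flag_invisible_failure(in_app, transfer)]
print(flagged)  # → ['student-2']
```

A check like this only works if the system collects transfer evidence at all, which is exactly what most apps skip.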

Motivation is the bottleneck

Even the best instructional design fails if students won’t engage consistently. The industry often treats motivation as UI polish (badges, confetti, streaks) rather than as a core product problem with measurable consequences.
“90% of the solution is motivating the kid. It’s not the edtech.” (Joe Liemandt, Founder of Timeback and Alpha School)
Traditional systems don’t give students a compelling reason to try. There is no meaningful reward for mastery, no “time back” for finishing early, no visible proof that effort leads to results. The standard motivational model is “work hard for 12 years, then 4 more, then a job.” No adult would accept that. Yet we expect children to. Effective motivation requires designing systems where effort leads to visible outcomes. Time reclaimed. Skills demonstrated. Goals achieved. Surface gamification doesn’t cut it.

Developers pay the infrastructure tax

For builders, fragmentation and the lack of standards create a compounding tax:
  • Rebuilding primitives: rostering, identity, permissions, observability
  • Guessing at data models for courses, content, and results
  • No way to validate impact, so iteration is slow and proof is expensive
  • No reliable feedback loop to tell you what’s actually improving learning
In other software categories, platforms reduce this tax. In education, the lack of shared standards and outcome-linked measurement means every serious team ends up trying to become a platform, whether they want to or not. Edtech apps are expensive to build, hard to measure, and don’t work together. Most fail to move the needle on actual learning. Not because the teams lack talent, but because the infrastructure to build effective, measurable, interoperable learning products does not exist.
This page describes the problems Timeback is built around. Next, see how the platform approaches standards, measurement, and interoperability.