The $13 Billion Problem
American schools spend approximately $13 billion on educational technology annually. A 2019 report by the Johns Hopkins Institute for Education Policy, analyzing technology adoption in 1,200 schools, found that the majority of purchased edtech licenses were used by fewer than 20% of intended users after the first year, and that many tools went entirely unused once initial professional development faded. The edtech graveyard (licenses that expire without meaningful use, tablets that sit uncharged in carts, platforms announced at back-to-school meetings and forgotten by October) is a real and expensive phenomenon.
This is not a technology problem. It is a procurement, implementation, and evaluation problem. Schools buy technology on the basis of impressive vendor demonstrations without rigorous needs assessment, pilot results, or implementation planning. The tools that succeed in school settings do so because their adoption was preceded by careful problem definition, matched with evidence of effectiveness, and supported with sustained professional development and system integration. This guide provides the framework for getting that right.
Start with the Problem, Not the Solution
The most common edtech procurement error is solution-first thinking: a vendor presents an impressive demo, administrators are excited by the features, and a purchase is made before anyone has clearly articulated what problem the tool is supposed to solve. The research on successful technology adoption across industries is consistent: technology that is adopted to solve a clearly defined, measurable problem produces dramatically better outcomes than technology adopted for its features or novelty.
Before evaluating any edtech tool, complete this problem definition exercise:
- What specific, measurable student outcome is currently below where it should be?
- What is the current cost of this gap (in remediation, grade retention, teacher time, or other quantifiable terms)?
- What does the research say are the most effective interventions for this specific gap?
- Does this technology implement one of those evidence-based interventions? How?
If you cannot answer all four questions before purchasing, you are not ready to purchase. The discipline of problem-first procurement filters out the majority of edtech that underperforms expectations.
True Cost of Ownership: The Hidden Multiplier
The licensing fee is the smallest part of what a school actually spends on educational technology. CoSN (the Consortium for School Networking) research suggests that the true first-year cost of a significant edtech implementation is typically 3–5x the licensing fee, once implementation costs are included:
- Professional development: Research by Learning Forward on effective PD suggests that sustainable adoption requires approximately 20–30 hours of training per teacher for complex platforms. At average loaded teacher hourly rates, 25 hours of PD for 50 teachers represents a substantial expense that rarely appears in the initial budget line.
- IT setup and ongoing support: Device management, single sign-on integration, data privacy configuration, ongoing troubleshooting, and system updates all require IT staff time. Many school districts underestimate IT support capacity constraints when evaluating new tools.
- Data privacy compliance review: Every tool that processes student data requires a data privacy review and a signed data processing agreement. Legal review, even at a minimal level, has real costs, and skipping it has potentially much larger ones.
- Curriculum alignment work: Tools that require curriculum restructuring to use effectively require teacher time to align existing materials to new workflows. This opportunity cost is real even when it doesn't appear on a budget line.
- Opportunity cost: The time teachers spend learning and using a new tool is time not spent on other instructional activities. If the tool doesn't produce commensurate learning gains, that opportunity cost is a net loss.
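The hidden costs above can be sketched as a simple first-year TCO estimate. Every figure in this sketch is a hypothetical placeholder that an administrator would replace with local numbers; the point is the structure of the calculation, not the specific values.

```python
# Hypothetical first-year total-cost-of-ownership (TCO) estimate for an
# edtech platform. All dollar figures and hours are illustrative placeholders.

license_fee = 40_000            # annual district license
teachers = 50                   # teachers expected to use the tool
pd_hours_per_teacher = 25       # within the 20-30 hour range cited above
teacher_hourly_rate = 45        # loaded hourly cost per teacher
it_setup_hours = 120            # SSO, device management, privacy configuration
it_hourly_rate = 55
legal_review = 3_000            # data privacy / DPA review
alignment_hours_per_teacher = 8 # curriculum alignment work

pd_cost = teachers * pd_hours_per_teacher * teacher_hourly_rate
it_cost = it_setup_hours * it_hourly_rate
alignment_cost = teachers * alignment_hours_per_teacher * teacher_hourly_rate

tco = license_fee + pd_cost + it_cost + legal_review + alignment_cost
print(f"First-year TCO: ${tco:,} ({tco / license_fee:.1f}x the license fee)")
# → First-year TCO: $123,850 (3.1x the license fee)
```

Even with these modest placeholder figures, the multiplier lands squarely in the 3–5x range: the license fee is roughly a third of what the first year actually costs.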
Evaluating Evidence: What Counts
Not all evidence is equal. EdTech vendors produce extensive marketing materials that may include impressive-looking statistics without meeting basic standards for causal evidence. Understanding the hierarchy of evidence quality enables administrators to distinguish genuine efficacy data from well-packaged marketing.
What to Look For
Randomized Controlled Trials (RCTs) are the gold standard: students are randomly assigned to use the tool or not, and outcomes are compared. Few edtech tools have RCT evidence; those that do are exceptional. Quasi-experimental designs (comparing matched schools or students) are the next best: less rigorous, but still meaningful if properly designed. Pre-post comparisons within a single school (before tool, after tool) are weak evidence, because many other things changed simultaneously. Case studies and testimonials are marketing, not evidence. Internal data provided by the vendor is inherently suspect; look for independently conducted or published research.
As a practical shortcut: EdReports.org (curriculum reviews), Common Sense Education (app reviews with research quality ratings), and the What Works Clearinghouse (a rigorous efficacy database maintained by the U.S. Department of Education's Institute of Education Sciences) all provide independent evaluations that are free and publicly accessible.
Pilot Program Design: Testing Before Buying at Scale
A well-designed pilot generates the data you need to make a scale decision before committing full budget. Research on implementation science suggests that pilots should:
- Run for at least 8–12 weeks: shorter pilots capture initial engagement effects, not sustained learning outcomes
- Include a comparison condition: at minimum, track the same metrics in a comparable group not using the tool
- Track leading indicators from week 2: teacher adoption rate, student time-on-task, and formative assessment trends are measurable quickly and predict eventual summative outcomes
- Include teacher experience data: surveys and focus groups at 4 and 8 weeks capture implementation fidelity and teacher-experienced barriers
- Pre-specify success criteria: define in advance what results would lead to scale-up vs. discontinuation. This prevents confirmation bias from driving the scale decision.
Pilots that include 5–10% of the eventual target population are large enough to generate meaningful data while containing the cost of a potential failed adoption. For a district of 50 schools, a 3–5 school pilot with full implementation support is the appropriate scale.
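Pre-specifying success criteria works best when the thresholds are literally written down before the pilot begins. The sketch below shows one hypothetical way to encode that discipline; the metric names and thresholds are illustrative assumptions, not recommendations.

```python
# Hypothetical pre-specified pilot success criteria, recorded before the
# pilot starts so the scale-up decision cannot drift with the results.
criteria = {
    "teacher_weekly_adoption_rate": 0.70,  # >= 70% of pilot teachers use it weekly
    "formative_gain_vs_comparison": 0.05,  # >= 5 pct-point gain over comparison group
    "teacher_would_recommend": 0.60,       # >= 60% of pilot teachers recommend scaling
}

def scale_decision(results: dict) -> str:
    """Return 'scale up' only if every pre-specified threshold is met."""
    met = all(results.get(metric, 0) >= threshold
              for metric, threshold in criteria.items())
    return "scale up" if met else "discontinue or redesign"

pilot_results = {  # illustrative week-12 results
    "teacher_weekly_adoption_rate": 0.78,
    "formative_gain_vs_comparison": 0.03,  # misses the 0.05 threshold
    "teacher_would_recommend": 0.71,
}
print(scale_decision(pilot_results))  # → discontinue or redesign
```

The design choice that matters is the all-or-nothing rule: strong engagement numbers cannot rescue a tool that misses its learning-outcome threshold, which is exactly the confirmation bias the pre-specification is meant to prevent.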
Vendor Red Flags
Edtech procurement experience has surfaced a reliable set of vendor behaviors that predict poor product quality or weak implementation support:
- No data privacy policy or vague commitments to FERPA compliance without specifics
- Efficacy evidence that consists only of internal data: no independent research, no published studies
- Inability to specify what evidence-based instructional approach the tool implements
- Professional development that consists of "getting started" training only, without ongoing implementation support
- Contracts that lock in multi-year commitments before a pilot
- Support staff who cannot answer specific questions about data handling, security architecture, or FERPA compliance
Presenting ROI to School Boards
School boards are responsible for public funds and face skeptical constituents who see headlines about technology expenditures without visible results. Effective board presentations lead with community values (student outcomes, fiscal responsibility), present the problem being solved before the solution being proposed, and quantify both the current cost of the problem and the projected cost-effectiveness of the intervention.
A strong ROI presentation structure: Current state (specific, quantified outcome gap) → Evidence base (what research shows works for this gap) → Tool fit (how this specific tool implements evidence-based practices) → Full cost (TCO, not just the license fee) → Pilot evidence (if available) → Success metrics and accountability (what you'll measure, when, and what happens if targets aren't met) → Scale plan and exit strategy.
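The "current state" and "full cost" steps of that structure reduce to a comparison a board can verify: the quantified annual cost of the outcome gap versus the full cost of the intervention. The sketch below uses hypothetical placeholder figures to show the arithmetic, not real district data.

```python
# Hypothetical board-presentation arithmetic: annual cost of the current
# outcome gap vs. full intervention cost. All figures are placeholders.

remediation_cost_per_student = 1_200   # e.g., summer remediation, retention costs
students_below_benchmark = 300
current_gap_cost = remediation_cost_per_student * students_below_benchmark

first_year_tco = 125_000               # full TCO, not just the license fee
expected_gap_reduction = 0.20          # assumed, ideally pilot-evidenced

annual_savings = current_gap_cost * expected_gap_reduction
print(f"Current gap cost: ${current_gap_cost:,}/year")
print(f"Projected savings: ${annual_savings:,.0f}/year vs. first-year TCO ${first_year_tco:,}")
```

Note that with these placeholder numbers the tool does not pay for itself in year one; an honest presentation shows that, along with the lower ongoing cost in later years, rather than overstating first-year returns.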
The exit strategy is often omitted and always important. Boards respond positively to procurement processes that include clear criteria for discontinuation: it demonstrates fiscal responsibility and prevents the sunk-cost trap that keeps underperforming tools in place long after evidence of failure accumulates.
EdTech ROI: Administrator's Checklist
- Define the problem before evaluating the solution: if you can't articulate the specific, measurable gap the tool addresses, you're not ready to purchase it.
- Calculate full TCO, not just license fees: professional development, IT support, and implementation time typically bring the true first-year cost to 3–5x the annual license. Budget for reality.
- Require independent efficacy evidence: vendor-provided data is marketing. What Works Clearinghouse and EdReports.org provide free independent reviews.
- Design an 8–12 week pilot with pre-specified success criteria, and honor the criteria. If the tool doesn't meet them, discontinue it regardless of sunk cost.
- Include an exit strategy in every procurement decision: boards and communities trust administrators who plan for failure as well as success.
Ready to see the difference? Try Koydo free today →