AI Tutor or Human Tutor? A Decision Framework for UK Schools
A practical framework for UK schools comparing AI tutoring with human tutor marketplaces on cost, safeguarding, curriculum fit and impact.
Senior leaders and MAT data leads are under more pressure than ever to prove that every intervention pound works. That is why the choice between AI tutoring and human tutor marketplaces is not really a technology question; it is a procurement, safeguarding, curriculum, and impact question. The right answer depends on scale, subject need, staffing capacity, and how quickly you need measurable progress. For a broader look at how school leaders assess platforms, see our guide to the best online tutoring websites for UK schools and our practical framework for online tutoring for schools.
In 2026, the most effective schools are not asking whether online tutoring is good or bad. They are asking which model fits the problem in front of them, and what evidence they will use to judge success. If you are comparing fixed-cost, AI-powered programmes such as Third Space Learning’s Skye with marketplace options like MyTutor or Fleet Tutors, the key is to separate outcomes you can scale from outcomes that need human judgement. This guide gives you a decision framework you can use in procurement, in trust-wide planning, and in board reporting. For context on value, read about affordable tutoring for schools and school tutoring budgets.
1. The real decision: intervention design, not just vendor choice
Start with the pupil need, not the platform
The strongest tutoring decisions begin with a precise statement of need. For example, a Year 6 cohort struggling with arithmetic fluency does not need the same solution as a Year 11 group needing essay feedback or spoken-language rehearsal. AI tutoring can be highly effective when the intervention goal is repeated practice, diagnostic pacing, and consistent delivery at scale. Human tutors are often stronger when the goal requires discussion, adaptive explanation, motivation, or nuanced feedback across a wider range of subjects.
This is why schools should map interventions to curriculum gaps before asking for quotes. A maths catch-up programme, a GCSE English revision cohort, and an A-level chemistry support package are operationally different problems. If you need help defining those gaps, our guides on primary school maths intervention, GCSE intervention strategies, and A-level intervention are a useful starting point.
Think in terms of intervention mechanics
Every tutoring model has a mechanism of change. AI tutoring works by offering highly standardised micro-feedback, immediate response loops, and repeatable delivery. Human tutoring works through relationship, explanation, and on-the-fly adjustment. Schools often overestimate the importance of the tutor’s warmth and underestimate the importance of structured repetition, but both matter depending on the learner. In intervention planning, the best question is: what type of practice will move this group fastest, with the least operational burden?
That thinking aligns with broader procurement discipline. A good school purchase does not simply chase the lowest hourly rate or the newest technology. It assesses whether the delivery model reduces teacher workload, fits timetabling constraints, and produces evidence that can survive scrutiny from governors and trust leaders. If you are refining that process, our article on tutoring procurement shows how to build a stronger buying case.
Use the same lens you would for any high-stakes service
Schools already evaluate many services through the same framework: reliability, compliance, cost control, and measurable value. It helps to borrow a “fitness for purpose” mindset rather than a brand-led mindset. Whether you are selecting a tutoring programme or reviewing a supplier contract, the question is whether the service can be repeated safely and effectively at the scale your setting requires. For a useful analogy on evaluating offers against real value, see how to spot a deal that is actually good value and lessons from emerging tech deals.
2. AI tutoring vs human tutoring: what each model is actually best at
AI tutoring excels at consistency, scale, and fixed-cost planning
AI tutoring is increasingly attractive to school leaders because it solves several operational problems at once. A fixed annual fee can make budgeting simpler than paying hourly rates across multiple terms and cohorts. That matters in a landscape where intervention demand is high and timetables are crowded. Third Space Learning’s Skye is a strong example of this model: unlimited one-to-one maths tutoring for schools at a fixed annual price, with delivery designed for scale and predictability.
The practical advantage is not just cost. AI tutoring reduces dependency on individual tutor availability and can offer a consistent experience across classes, year groups, and even trust-wide rollouts. For MATs, that consistency is powerful because it allows central teams to standardise interventions and compare cohorts more cleanly. It also helps schools avoid the hidden variability that can arise when multiple freelance tutors deliver different explanations, different expectations, and different rhythms.
Human tutor marketplaces are better for breadth and complex adaptability
Human tutor marketplaces such as MyTutor and Fleet Tutors offer strengths that AI cannot fully replace. They are generally better when the subject range is broad, the need is bespoke, or the learner requires a strong interpersonal connection to stay engaged. A student preparing for a GCSE language oral, an A-level student refining evaluation in humanities, or a pupil with anxiety around schoolwork may benefit from a human tutor’s conversational flexibility. That flexibility can be especially valuable where the barrier to progress is confidence rather than content alone.
The trade-off is operational complexity. Tutor marketplaces can give schools access to a wider range of subjects and personalities, but quality control becomes more important. Leaders need to understand how tutors are vetted, what safeguarding checks are in place, how sessions are quality-assured, and how quickly a suitable tutor can be matched. For more on marketplace choices, our article on online tutoring websites for UK schools compares the broad options schools usually shortlist.
The decision is rarely either/or
In practice, many schools will use AI tutoring and human tutoring for different tiers of need. AI can power high-volume maths intervention for one population, while human tutors support targeted GCSE English, science, or language revision for another. That blended strategy often gives the best balance of scale, affordability, and pedagogical fit. If your school is asking where to deploy each model, begin by aligning the intervention to the curriculum outcome and the intensity of adult support needed.
3. Curriculum alignment: the hidden differentiator
Why alignment matters more than “personalisation” as a buzzword
Many vendors promise personalisation, but schools should ask a simpler question: does the tutoring content align to the curriculum your pupils are actually studying? Curriculum alignment is essential because interventions are most effective when they reinforce classroom learning rather than creating a parallel track. If a pupil is practising methods that do not match your scheme of work or exam board, you may get activity without attainment. That is why alignment should sit at the top of any assessment of AI tutoring or human tutor services.
Third Space Learning’s approach has particular appeal here because its maths programme is built around structured progression and school use cases. That makes it easier for leaders to map provision against year-group expectations and assessment points. For schools wanting to tighten curriculum sequencing, the guidance in our curriculum-linked intervention article is especially relevant.
Human tutors vary in alignment unless the school manages the brief tightly
Human tutors can be excellent, but curriculum alignment depends heavily on the quality of the brief. A tutor marketplace may give you a strong tutor, but if the school has not clearly specified exam board, topic sequence, assessment window, or expected learning outcome, the session can drift. That is manageable with good oversight, but it adds workload for the designated intervention lead or subject lead. For this reason, schools should see curriculum alignment as a shared responsibility between supplier and school rather than a default feature of any human tutor service.
When schools need support with planning, our article on personalised learning plans shows how to translate pupil need into a structured provision map. This matters because alignment is not just a content issue; it is a sequencing issue. The right tutor can still underperform if lessons arrive too early, too late, or without the prerequisite knowledge the pupil needs.
Use curriculum alignment as a procurement criterion
In practical procurement terms, ask each supplier to show how their service maps onto specific year groups, exam boards, and skill domains. Require sample lesson structures, progression logic, and reporting output. If the supplier cannot demonstrate how a pupil in Year 8 with fractions gaps differs from a pupil in Year 10 preparing for GCSE foundation maths, the model may be too generic for school use. That scrutiny will improve decision-making and reduce wasted spend.
4. Safeguarding and tutor quality: where the risks differ most
AI tutoring changes the safeguarding conversation
With AI tutoring, safeguarding is less about individual adult behaviour and more about data protection, content safety, and system design. Schools need to know how pupil data is stored, how prompts and responses are controlled, whether content is filtered, and what human oversight exists. Leaders should also understand whether the system logs interactions in ways that support review without creating unnecessary data exposure. This is not a side issue: in a school context, data governance is as important as pedagogy.
Pro tip: do not ask only “is it safe?” Ask “what is the failure mode, who monitors it, and how fast can the school intervene?” That question is especially important in AI tutoring procurement.
For a useful comparison with other high-compliance settings, our article on balancing compliance and AI workloads shows how organisations with sensitive data structure governance around new technology.
Human tutor marketplaces put tutor vetting front and centre
With human tutors, safeguarding risk shifts to the quality of vetting, training, and supervision. Enhanced DBS checks, identity verification, reference checks, and school liaison processes become core buying criteria. MyTutor, Fleet Tutors, and similar providers can work well for schools, but only if the governance pathway is clear. Senior leaders should ask how quickly concerns are escalated, who the designated safeguarding lead can contact, and what happens if a tutor is unavailable or under review.
It is also important to distinguish between claims and evidence. A marketplace may advertise “verified tutors,” but schools need to know what verified means in practice. Is it identity only, qualification only, or a more rigorous selection process? For a deeper lens on assurance and authenticity, our guide on validating genuine products before purchase is a surprisingly useful analogy for assessing whether provider claims are robust or merely polished.
Quality assurance should be observable, not assumed
Whether you choose AI or human delivery, schools should insist on evidence of quality assurance. That might include session observations, tutor feedback, lesson sampling, pupil attendance tracking, or cohort-level reporting. For human tutoring, tutor quality is more variable, which makes oversight more important. For AI tutoring, quality is more stable, but leaders need confidence that the system actually delivers the right level of challenge and support, not just an engaging interface.
| Decision factor | AI tutoring | Human tutor marketplace | Best use case |
|---|---|---|---|
| Scale | High and repeatable | Depends on tutor supply | Large cohorts needing consistent delivery |
| Curriculum alignment | Strong when programme is tightly structured | Varies by tutor and school brief | Exam-focused intervention with clear outcomes |
| Safeguarding focus | Data governance and content safety | DBS, vetting, supervision, escalation | Schools with strict compliance requirements |
| Cost model | Often fixed cost | Usually hourly or session-based | Budget predictability vs bespoke flexibility |
| Subject breadth | Usually narrower | Wide range across many subjects | Multi-subject or niche support needs |
5. Cost-effectiveness: compare total cost, not headline price
The cheapest-looking option is often not the cheapest to run
School leaders know that intervention cost is not just the fee on the invoice. It also includes admin time, matching time, cancellation risk, oversight, and the opportunity cost of incomplete delivery. A marketplace tutor may appear less expensive per hour, but if matching is slow or attendance is uneven, the true cost rises quickly. Fixed-cost AI tutoring can look more expensive at first glance, yet it may become more economical when used at scale across a whole year group or a trust-wide rollout.
This is why procurement should focus on cost-effectiveness rather than cost alone. The key metric is cost per expected unit of impact, not simply cost per session. To explore that thinking in more detail, our content on cost-effective tutoring and what tutoring really costs offers a practical lens for finance teams.
Fixed-cost models improve budget certainty
For MATs, budget certainty can be just as valuable as marginal price reductions. Third Space Learning’s fixed annual pricing for Skye gives leaders a way to plan ahead, avoid surprise spend, and expand provision without negotiating multiple hourly contracts. That is especially helpful where intervention demand changes mid-year and leaders need to add pupils without rebuilding supplier relationships. Predictable pricing also supports reporting to trustees and governors, because the financial case is easier to explain.
Human tutor marketplaces can still be cost-effective, particularly when the school needs a small number of carefully targeted sessions. However, costs can rise if the school is trying to support multiple cohorts, multiple subjects, or long-running catch-up programmes. Leaders should therefore model both the best-case and realistic usage scenario before signing. For a related perspective on buying decisions, see how to buy smart when the market is still catching its breath.
Build a cost model with three layers
When comparing suppliers, calculate: direct tuition cost, staff oversight cost, and expected impact cost. Direct tuition cost is the visible fee; oversight cost includes time spent matching, scheduling, reviewing, and chasing; impact cost asks how much progress each pound buys. A model that wins on one layer may lose on the others. Schools that adopt this three-layer approach tend to make more durable decisions and reduce procurement regret later in the year.
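The three-layer comparison above is easy to sketch as a small calculation. The sketch below is illustrative only: the fees, oversight hours, staff rate, and expected gains are assumptions a finance team would replace with its own figures, not real supplier quotes.

```python
# Hedged sketch of the three-layer cost model: direct tuition cost,
# staff oversight cost, and cost per expected unit of impact.
# All figures below are illustrative assumptions, not supplier quotes.

def cost_per_expected_gain(direct_cost, oversight_hours, staff_hourly_rate,
                           pupils, expected_gain_per_pupil):
    """Return (total cost, cost per expected unit of impact).

    expected_gain_per_pupil: e.g. predicted sub-skill bands gained
    per pupil over the programme.
    """
    oversight_cost = oversight_hours * staff_hourly_rate
    total_cost = direct_cost + oversight_cost
    total_expected_gain = pupils * expected_gain_per_pupil
    return total_cost, total_cost / total_expected_gain

# Fixed-cost AI programme: higher headline fee, light oversight, large cohort.
ai_total, ai_per_gain = cost_per_expected_gain(
    direct_cost=12000, oversight_hours=20, staff_hourly_rate=30,
    pupils=120, expected_gain_per_pupil=1.0)

# Hourly marketplace: lower headline fee, heavier matching and
# oversight workload, much smaller cohort.
mk_total, mk_per_gain = cost_per_expected_gain(
    direct_cost=4500, oversight_hours=60, staff_hourly_rate=30,
    pupils=15, expected_gain_per_pupil=1.2)
```

With these (invented) numbers the marketplace option wins on total spend but loses on cost per unit of impact, which is exactly the distinction the three-layer model is designed to surface.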
6. Measuring impact: what MAT data leads should insist on
Define success before delivery begins
The best interventions start with a measurable hypothesis. For example: “Year 7 pupils with KS2 arithmetic gaps will improve by at least one sub-skill band over ten weeks through structured AI tutoring.” That statement tells everyone what success looks like and how to measure it. It also prevents the common mistake of collecting attendance data without linking it to attainment. Impact measurement should be built into the contract, not added after the programme ends.
For schools wanting a sharper evidence model, our guide on measuring tutoring impact explains what to track and how to avoid misleading conclusions. It is also worth reading school intervention data to understand which metrics are actually decision-useful.
Use both quantitative and qualitative indicators
Quantitative measures are essential, but they should not stand alone. Track baseline and post-intervention scores, attendance, completion rates, and topic-level improvement. Then add qualitative evidence from teachers and pupils about confidence, independence, and readiness to learn. If a tutor programme improves confidence without moving attainment, you may need to refine the delivery model. If attainment improves but attendance is poor, the programme may not be sustainable.
MAT data leads should create a standard reporting template so comparisons are possible across schools. Without consistency, one school’s “successful intervention” may be another school’s incomplete data set. That is why strong governance matters. For more on structuring accountability, our piece on education data reporting can help teams build a better dashboard.
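One way to enforce that consistency is to define the reporting row once, centrally, and require every school to fill the same fields. The schema below is a hypothetical illustration, not any provider's actual export format; field names and figures are assumptions.

```python
# Hedged sketch: one possible standard reporting row a MAT could
# require from every school so cohorts are directly comparable.
# Field names are illustrative assumptions, not a real export format.
from dataclasses import dataclass


@dataclass
class InterventionReportRow:
    school: str
    cohort: str            # e.g. "Y7 arithmetic catch-up"
    pupils_enrolled: int
    sessions_planned: int
    sessions_attended: int
    baseline_score: float  # mean standardised score at entry
    post_score: float      # mean standardised score at exit

    @property
    def attendance_rate(self) -> float:
        return self.sessions_attended / self.sessions_planned

    @property
    def mean_gain(self) -> float:
        return self.post_score - self.baseline_score


# Example row with invented figures.
row = InterventionReportRow("Oak Park Primary", "Y7 arithmetic catch-up",
                            pupils_enrolled=24, sessions_planned=240,
                            sessions_attended=204,
                            baseline_score=92.5, post_score=98.1)
```

Because every school reports the same derived metrics (attendance rate, mean gain), the central team can compare cohorts without reconciling inconsistent spreadsheets first.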
Don’t confuse engagement with impact
High engagement is welcome, but it is not proof of attainment. An AI tutoring session may feel smooth and efficient, while a human tutor may create lively discussion, but the real question is whether pupils are retaining knowledge and transferring it to classwork and assessments. Schools should therefore combine short-cycle checks, teacher feedback, and end-of-block assessments. That makes the impact case far stronger when reporting to senior leaders or the trust board.
Pro tip: if a provider cannot explain how they will help your school prove impact in 6 to 12 weeks, they are probably selling activity rather than intervention.
7. Procurement framework: seven questions every school should ask
1. What exact pupil group are we trying to move?
Start with cohort definition, current attainment, and the curriculum barrier. A vague brief like “catch-up maths” is too broad to procure against effectively. Specificity improves both vendor response quality and post-programme evaluation.
2. What level of human support is truly required?
If the main need is repeated practice and curriculum sequencing, AI tutoring may be the more efficient fit. If the need involves emotional reassurance, oral fluency, or a broad academic conversation, human tutoring may be more appropriate. The school should not pay for human time where the task is largely mechanical, nor should it force AI where relational trust is the key barrier.
3. What does tutor quality assurance look like?
For a human marketplace, ask about DBS checks, training, subject screening, and escalation routes. For AI, ask about content moderation, safeguarding controls, and pedagogical oversight. Either way, quality assurance must be visible and documented.
4. How will curriculum alignment be maintained?
Request curriculum maps, sample lesson plans, and an explanation of how the provider adapts to your exam board or scheme of work. This is one of the most important procurement checks because poor alignment can waste a whole term of intervention. To sharpen this area, see our guide on lesson plans for tutoring.
5. What is the reporting model?
Leaders need reporting that is useful to teachers, senior leaders, and governors. Ask for attendance, progression, and outcome data in a format you can act on quickly. A provider that only offers generic summaries will create more work than value.
6. What is the cancellation and continuity risk?
In marketplace models, tutor illness, schedule changes, and matching delays can disrupt continuity. AI tutoring can reduce this risk by offering more stable delivery. Schools should price that continuity into their decision.
7. Can we scale this trust-wide or only school-by-school?
MAT leaders should think beyond one school’s immediate need and ask whether the model can support standardised intervention across multiple settings. Fixed-cost AI tutoring often wins here because it is easier to roll out consistently across a trust. For a wider technology lens, our article on navigating the cloud cost landscape is a helpful reference point for scaling efficiently.
8. When AI tutoring is the better choice
Large-scale maths intervention
AI tutoring is often the strongest fit when the school needs scalable maths intervention across many pupils. Maths lends itself well to structured practice, immediate feedback, and repeated skill-building. This is exactly where Skye’s fixed-cost model can be compelling, especially if the school wants a predictable annual price and a programme that can be deployed without constant tutor matching. If your priority is to move a large group through specific maths gaps, AI can be the most operationally elegant option.
Budget-constrained provision with consistent demand
Some schools need a durable solution that does not fluctuate with tutor supply or seasonal demand. In that context, fixed-cost AI tutoring supports planning and avoids the stop-start effect that sometimes appears when hourly provision is cut back to protect budget. That makes it easier to sustain intervention over a full academic year. Schools can then focus on embedding practice rather than renegotiating supply.
Trust-wide standardisation and reporting
Where a MAT wants common reporting, common content, and common expectations, AI tutoring can reduce variation. That is particularly useful for central improvement teams that need to compare outcomes across schools. It also lowers the training burden on local staff, since the intervention model is usually less dependent on individual tutor judgement. For more examples of how schools can use consistent intervention design, see our guide to school intervention programmes.
9. When a human tutor marketplace is the better choice
Multi-subject or niche support
Human tutor marketplaces shine when schools need breadth. If you are supporting a mixed cohort across English, science, languages, and humanities, a marketplace can provide more specialist options. This is especially valuable at secondary level where exam boards, subject combinations, and learner needs vary widely. Fleet Tutors and MyTutor are often considered in these scenarios because they can meet more varied demand.
Confidence-building and complex learner profiles
Some pupils need more than subject content. They need someone who can build trust, slow down the pace, reframe failure, and keep them engaged over time. A strong human tutor can often do this better than any automated system. That matters for pupils who are anxious, disengaged, or returning to learning after a disruption.
High-stakes assessment where feedback depth matters
Essay-based subjects, oral assessments, and advanced reasoning tasks often benefit from human feedback that is nuanced and context-sensitive. A human tutor can interpret the subtleties of a student’s answer and respond in a way that feels collaborative. In these situations, schools may decide that the higher hourly cost is justified by the quality of support. For further reading on subject-specific intervention, our guide to GCSE tutoring is useful.
10. A practical decision model for school leaders
Use this rule of thumb
If the problem is large, repeatable, and curriculum-structured, start with AI tutoring. If the problem is complex, relational, or wide-ranging across subjects, start with a human tutor marketplace. If the school needs both, split the provision by function rather than trying to force one supplier to do everything. That approach usually delivers better value, cleaner reporting, and less operational stress.
Score each option across five criteria
Rate each provider out of five for scale, curriculum alignment, safeguarding, tutor quality, and cost-effectiveness. Then weight the criteria according to your actual priorities. For a primary maths catch-up strategy, scale and alignment may dominate. For a sixth form intervention package, tutor quality and subject depth may matter more.
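The scoring-and-weighting step above can be made concrete in a few lines. The weights and ratings below are purely illustrative assumptions for a primary maths catch-up scenario; every school should substitute its own priorities.

```python
# Hedged sketch: weighted scoring across the five criteria described
# above. Weights and ratings are illustrative assumptions, not
# recommendations about any real provider.

CRITERIA = ["scale", "curriculum_alignment", "safeguarding",
            "tutor_quality", "cost_effectiveness"]


def weighted_score(ratings, weights):
    """Each rating is out of 5; weights must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(ratings[c] * weights[c] for c in CRITERIA)


# Primary maths catch-up: scale and alignment dominate the weighting.
weights = {"scale": 0.3, "curriculum_alignment": 0.3, "safeguarding": 0.2,
           "tutor_quality": 0.1, "cost_effectiveness": 0.1}

ai = {"scale": 5, "curriculum_alignment": 4, "safeguarding": 4,
      "tutor_quality": 3, "cost_effectiveness": 5}
marketplace = {"scale": 3, "curriculum_alignment": 3, "safeguarding": 4,
               "tutor_quality": 5, "cost_effectiveness": 3}

ai_score = weighted_score(ai, weights)
mk_score = weighted_score(marketplace, weights)
```

Re-running the same ratings with sixth-form weights (tutor quality and subject depth dominant) would typically flip the ranking, which is the point: the framework makes the priority trade-off explicit rather than leaving it implicit in a brand preference.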
Build a pilot before full rollout
Whenever possible, begin with a controlled pilot. Use a narrow cohort, a defined timeframe, and clear pre/post measures. This allows you to test attendance, satisfaction, and impact before committing to wider spend. Pilots reduce risk and generate the evidence needed to secure internal buy-in. For schools that want a structured launch, our article on tutoring for schools is a strong companion piece.
Conclusion: choose the model that matches the problem
The best tutoring strategy is not the most fashionable one; it is the one that fits the learning need, the safeguarding context, the curriculum, and the budget. AI tutoring offers schools a powerful way to scale maths intervention with fixed costs, consistent delivery, and easier reporting. Human tutor marketplaces bring flexibility, range, and relational depth, especially for broader subject support and high-stakes, feedback-heavy work. Senior leaders and MAT data leads should therefore treat this as a decision about system design, not supplier preference.
If you want the shortest possible version of the framework, it is this: choose AI when you need scale and standardisation; choose human tutoring when you need breadth and bespoke adaptation; and choose a blended model when your trust has both needs at once. For further reading on funding and procurement, see school tutoring funding, tutoring quality, and the best online tutoring websites for UK schools.
Frequently Asked Questions
Is AI tutoring safe for schools?
It can be, provided the provider has robust data protection, content moderation, and school-friendly safeguarding processes. Schools should ask how pupil data is handled, who monitors outputs, and what controls exist for inappropriate content or unexpected responses. AI tutoring is not automatically safer or riskier than human tuition; it simply shifts the nature of the risk.
Which is more cost-effective: AI tutoring or human tutoring?
That depends on scale and use case. AI tutoring is often more cost-effective for large, repeatable interventions because pricing can be fixed and delivery can be scaled without matching additional tutors. Human tutoring can be more cost-effective for smaller, highly targeted needs where the value of bespoke feedback outweighs the hourly cost.
Can schools use both AI and human tutors?
Yes, and many should. A blended model often works best because AI can handle structured, high-volume maths intervention while human tutors focus on subjects or pupils that need more flexibility. The most effective schools separate by need rather than trying to force one delivery model across every intervention.
How do we judge tutor quality in a marketplace model?
Look for enhanced DBS checks, qualification verification, subject screening, and clear escalation procedures. Ask for examples of how quality is monitored after placement, not just before it. Tutor quality should be evidenced through reporting, observation, and consistency over time.
What should MAT data leads track to prove impact?
Track baseline attainment, attendance, session completion, topic-level progress, and post-intervention outcomes. Add teacher judgement and pupil confidence measures so you can understand both academic and behavioural effects. The strongest reporting links the intervention directly to curriculum outcomes and exam readiness.
How important is curriculum alignment compared with price?
Very important. A cheaper programme that misses the curriculum is often more expensive in the long run because it consumes time without shifting attainment. Curriculum alignment should be a non-negotiable part of procurement, especially for exam-focused intervention.
Related Reading
- Online tutoring for schools - Learn how school leaders evaluate delivery models and safeguarding expectations.
- Tutoring procurement - A practical guide to buying tutoring services with confidence.
- Measuring tutoring impact - See how to build a stronger evidence base for intervention decisions.
- School intervention data - Improve reporting, comparison, and accountability across cohorts.
- Tutoring quality - Understand the standards that matter most when choosing a provider.
James Harrington
Senior Education Content Strategist