The Teacher Action Plan API is part of the Personalization Layer. It transforms evaluation outputs into reteach strategies, extension content, and class insights, enabling teachers to respond quickly and effectively.

Core Capabilities

  • Grouped Insights
    Automatically cluster students by common misconceptions for efficient reteaching.
  • Targeted Reteach Strategies
    Provide instructional approaches aligned to curriculum competencies and learning gaps.
  • Extension Content
    Suggest advanced materials and challenges for high-performing learners.
  • Class Segmentation
    Enable teachers to differentiate instruction with clear grouping data.
  • Curriculum Alignment
    Map all recommendations to NCERT/NEP 2020 standards.
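
To make these capabilities concrete, the sketch below shows one possible shape for a generated action plan. The field names (`groups`, `reteach_strategy`, `extension_content`, `curriculum_codes`) and sample values are illustrative assumptions, not the published schema.

```python
# Illustrative sketch only: field names and values are assumptions, not the real schema.
example_action_plan = {
    "class_id": "8A-science",
    "groups": [
        {
            "label": "Misconception: confuses mass and weight",
            "students": ["S-101", "S-114", "S-120"],
            "reteach_strategy": {
                "approach": "Guided demonstration with a spring balance",
                "curriculum_codes": ["NCERT-SCI-8.11"],   # hypothetical competency code
                "estimated_minutes": 20,
            },
        },
    ],
    "extension_content": [
        {"students": ["S-103"], "suggestion": "Challenge problems on gravitation"},
    ],
}
```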

Supported Inputs

  • ✅ Exam Evaluation results (class-level errors, rubric data, question breakdowns)
  • ✅ Assignment Evaluation results (daily misconceptions, performance clusters)
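
For orientation, a class-level evaluation result fed into the API might look something like the sketch below. The structure and field names are assumptions for illustration; the actual payloads come from the Exam and Assignment Evaluation Suites.

```python
# Hypothetical evaluation-result payload; real field names may differ.
evaluation_results = {
    "source": "exam_evaluation",          # or "assignment_evaluation"
    "class_id": "8A-science",
    "question_breakdown": [
        {
            "question_id": "Q4",
            "rubric": "3 marks: definition, formula, worked example",
            "class_level_errors": ["unit conversion", "formula recall"],
            "average_score": 1.6,
        },
    ],
}
```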

Stakeholder Value

  • Teachers → Actionable reteach strategies and group insights, reducing planning time.
  • Students → Faster remediation and differentiated instruction based on ability.
  • Leaders → Visibility into teaching effectiveness and group-level remediation patterns.
  • Parents → Indirect benefit through targeted reteach cycles that improve learning outcomes.

Ecosystem Integration

  • Upstream: Exam & Assignment Evaluation Suites feed class-level data.
  • Core: API generates teacher plans with groupings, strategies, and extension pathways.
  • Downstream: Plans flow into AI Studio to generate lesson plans, worksheets, and reteach materials, and into ClassTrack to monitor teaching effectiveness.
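
As a rough sketch of the core step, the snippet below posts class-level evaluation data to a hypothetical action-plan endpoint and reads back the resulting groups and strategies. The base URL, path, authentication scheme, and response fields are assumptions; consult the API reference for the real contract.

```python
import requests

API_BASE = "https://api.example.com/v1"            # placeholder base URL
HEADERS = {"Authorization": "Bearer <API_KEY>"}    # assumed auth scheme

# Minimal class-level input; real payloads come from the Evaluation Suites.
payload = {
    "class_id": "8A-science",
    "evaluation_results": {"source": "exam_evaluation", "question_breakdown": []},
}

# Generate a teacher action plan (hypothetical endpoint and schema).
resp = requests.post(f"{API_BASE}/teacher-action-plans", json=payload,
                     headers=HEADERS, timeout=30)
resp.raise_for_status()
plan = resp.json()

# Downstream, groups and strategies can be handed to AI Studio or ClassTrack.
for group in plan.get("groups", []):
    print(group["label"], "->", group["reteach_strategy"]["approach"])
```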

Next Step

Next → See the Workflow to understand how class-level data is transformed into reteach strategies and group insights.

FAQ

What do Teacher Action Plans include, and how do group insights work?
Teacher Action Plans package evaluation outputs into reteach strategies, group insights, and ready-to-use lesson materials. Group insights highlight common needs across the class based on transparent evidence such as scores, feedback JSON, and annotated copies, so teachers can target instruction efficiently.

Can the plans reflect handwritten, diagram-based, or multimedia student work?
Yes. The Evaluation Layer ingests typed text, scanned handwriting, diagrams, audio, and video, and its results flow directly into Teacher Action Plans, so the recommendations reflect real classroom work, not just typed responses.

How do we integrate Teacher Action Plans into an existing LMS or ERP?
You can embed a white-label UI for teacher-ready plans or call the Action Plan APIs for deeper workflow control. Structured outputs and webhooks make it straightforward to publish plans into existing LMS/ERP flows and dashboards while preserving your brand.
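
One possible shape for that integration, assuming a webhook that delivers published plans and a generic LMS REST endpoint (both hypothetical), is sketched below.

```python
from flask import Flask, request
import requests

app = Flask(__name__)
LMS_URL = "https://lms.example.com/api/plans"   # placeholder LMS endpoint

@app.route("/webhooks/action-plan", methods=["POST"])
def receive_plan():
    """Receive a published action plan and forward it to the LMS (illustrative only)."""
    plan = request.get_json(force=True)
    # Map the plan into whatever structure the LMS expects; field names are assumptions.
    lms_record = {
        "title": f"Reteach plan for {plan.get('class_id', 'unknown class')}",
        "body": plan,
    }
    requests.post(LMS_URL, json=lms_record, timeout=30)
    return {"status": "published"}, 200
```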

What evidence do the plans draw on, and can it be audited?
Upstream evaluation and results APIs provide explainable outputs such as section scores, step-wise marking, feedback JSON, and annotated copies. Teacher Action Plans consume these signals to produce grouped reteach strategies and materials, giving you transparent, exportable evidence for governance and reporting.

Can teachers review or override the recommendations?
Yes. A human-in-the-loop workflow lets teachers or reviewers confirm, edit, or override recommendations before publishing, with auditability so expert judgment remains the final authority.
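
A review step might look like the following sketch, in which a teacher edits one strategy and records approval before publishing; the endpoints, identifiers, and fields are assumptions rather than the documented API.

```python
import requests

API_BASE = "https://api.example.com/v1"            # placeholder base URL
HEADERS = {"Authorization": "Bearer <API_KEY>"}    # assumed auth scheme
plan_id = "plan_123"                               # hypothetical plan identifier

# Teacher overrides one recommendation before approval (hypothetical endpoint).
requests.patch(
    f"{API_BASE}/teacher-action-plans/{plan_id}",
    json={"groups": [{"label": "Misconception: unit conversion",
                      "reteach_strategy": {"approach": "Worked examples on the board"}}]},
    headers=HEADERS, timeout=30,
)

# Record approval so the audit trail captures who signed off.
requests.post(
    f"{API_BASE}/teacher-action-plans/{plan_id}/approve",
    json={"reviewer": "teacher_42", "note": "Adjusted strategy for group 1"},
    headers=HEADERS, timeout=30,
)
```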

How are data privacy and board compliance handled?
Plans and reports can be operated in line with CBSE, ICSE, and GDPR requirements, with data minimization, role-based access, and audit trails. Transparent outputs and approval flows help satisfy inspections and policy requirements across boards and regions.

How reliable are the recommendations for subjective answers?
The system combines rubric-based and model-answer evaluation to produce dependable, explainable judgments, then maps findings into teacher-ready reteach strategies and groups. For nuanced cases, human-in-the-loop review keeps outcomes aligned to your marking schemes and classroom context.

Can plans generate ready-to-teach materials?
Yes. Action Plan signals feed AI Studio to generate curriculum-aligned lesson plans, worksheets, and model answers, so remediation becomes immediately teachable without separate authoring steps.

How do leaders monitor impact over time?
Outputs are designed for dashboards and reports, with JSON exports and annotated evidence that support audits and stakeholder transparency. Cohort-level insights can be monitored so teams can iterate on strategies and measure instructional impact over time.
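
As an illustration, a JSON export of a plan could be flattened into a per-group CSV for a dashboard or report, as in the sketch below; the export schema shown is an assumption.

```python
import csv
import json

# Assume a JSON export of a plan has been saved locally; the schema is illustrative.
with open("action_plan_export.json") as f:
    plan = json.load(f)

# Flatten group-level insights into one CSV row per group for a dashboard or report.
with open("cohort_insights.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["class_id", "group_label", "students", "strategy"])
    for group in plan.get("groups", []):
        writer.writerow([
            plan.get("class_id", ""),
            group.get("label", ""),
            ";".join(group.get("students", [])),
            group.get("reteach_strategy", {}).get("approach", ""),
        ])
```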

What accuracy should we expect?
Evaluations target 95% accuracy across subjective and objective tasks and support a human-in-the-loop option for edge cases. This combination delivers consistent plans across cohorts while preserving educator oversight for high-stakes or ambiguous scenarios.