This flow shows how a single exam script travels across all layers of CrazyGoldFish’s AI reasoning stack:
Evaluation → Personalization → Content Generation → Observation → back to Evaluation. It demonstrates how the system closes the loop to deliver measurable outcomes.

Step-by-Step Journey

1. Exam Evaluation

  • Student’s handwritten exam is uploaded.
  • The Exam Evaluation Suite applies rubric/model-answer grading.
  • Outputs: scores, feedback JSON, annotated copies.
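
The exact shape of these artifacts is not specified on this page; as a rough sketch, a feedback JSON payload with its approval flag might resemble the TypeScript interface below (all field names are illustrative assumptions, apart from isApproved, which the FAQ mentions).

```typescript
// Illustrative sketch of an evaluation artifact; field names are assumptions,
// except isApproved, which this page's FAQ cites as an approval flag.
interface EvaluationArtifact {
  examId: string;
  studentId: string;
  totalScore: number;
  maxScore: number;
  sections: Array<{
    sectionId: string;            // e.g. "Q3" or "Section B"
    score: number;
    maxScore: number;
    stepWiseMarking?: string[];   // per-step credit notes, where enabled
    feedback: string;             // rubric- or model-answer-based comments
  }>;
  isApproved: boolean;            // human-in-the-loop approval flag
  annotatedCopyUrl?: string;      // link to the annotated copy export
}
```
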
2. Personalization with Action Plans

  • Student Plans → personalized targets, practice tasks, milestones.
  • Teacher Plans → reteach strategies, group insights, extension content.
  • Parent Plans → activity packs + simplified progress reports.
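
As an illustration of how evaluation outputs could translate into a plan, the sketch below models a student action plan; the structure and field names are assumptions, not a documented schema.

```typescript
// Hypothetical student action plan derived from evaluation results.
// Field names and structure are assumptions for illustration only.
interface StudentActionPlan {
  studentId: string;
  sourceExamId: string;             // links the plan back to the graded exam
  targets: Array<{
    topic: string;                  // e.g. "Quadratic equations"
    currentMastery: number;         // 0-1, inferred from section scores
    goalMastery: number;            // personalized target
    milestone: string;              // ISO date for the milestone check-in
  }>;
  practiceTasks: string[];          // identifiers of assigned practice sets
}
```
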
3. Content Generation in AI Studio

  • Lesson Plan Builder → creates NEP-aligned, editable teaching scripts.
  • Worksheet Builder → generates remedial practice worksheets.
  • Outputs: ready-to-teach lesson packets, practice sets, and answer keys.
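
The builders are reachable through the platform's APIs per the FAQ, but the endpoint and payload below are purely illustrative assumptions of what a remediation request might look like.

```typescript
// Hypothetical worksheet-generation call; the endpoint path and request fields
// are assumptions, not documented AI Studio parameters.
async function requestRemedialWorksheet(apiBase: string, apiKey: string): Promise<unknown> {
  const response = await fetch(`${apiBase}/ai-studio/worksheets`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      grade: 8,
      subject: "Mathematics",
      topics: ["Linear equations"],   // gaps surfaced by the Evaluation Layer
      difficulty: "remedial",
      includeAnswerKey: true,         // return a practice set with its answer key
    }),
  });
  if (!response.ok) throw new Error(`Worksheet request failed: ${response.status}`);
  return response.json();
}
```
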
4. Classroom Observation via ClassTrack

  • Class sessions recorded and analyzed for pedagogy, engagement, and pacing.
  • Teacher Growth Reports, Parent Engagement Reports, and Principal Dashboards are generated.
5. Continuous Improvement Loop

  • Observation data refines rubrics and evaluation criteria.
  • Teaching insights adjust lesson plans and worksheets in AI Studio.
  • Ecosystem loop resets for the next cycle of exams/assignments.

Stakeholder Benefits

  • Students → Clear progress pathways, faster remediation, adaptive practice.
  • Teachers → 8–10 hours saved weekly on prep, guided reteaching strategies.
  • Leaders → Access dashboards for performance, compliance, and governance.
  • Parents → Transparent progress updates and activity packs.

Ecosystem Integration

  • Evaluation Layer → Exam/Assignment grading.
  • Personalization Layer → Action Plans for Students, Teachers, Parents.
  • Content Generation Layer → Lesson Plans + Worksheets from AI Studio.
  • Observation Layer → ClassTrack analysis closes the loop.

FAQ

Student work is evaluated against rubrics and model answers across text, handwriting, diagrams, audio, and video, with optional human‑in‑the‑loop review before publishing. Those structured results drive personalized action plans and curriculum‑aligned content generation, while observation signals and dashboards close the loop for continuous improvement. You can implement the flow via APIs or a white‑label embeddable UI, keeping the experience native to your platform.
The platform produces scores, section summaries, step‑wise marking, model answers, teacher feedback, and approval flags, plus feedback JSON and annotated copies for exports. These can be retrieved via APIs such as the Final Results API, with webhooks notifying when artifacts are ready to be consumed by downstream personalization and content generation. Dashboards aggregate these outputs for classroom and leadership insights.
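
The path, parameters, and response handling below are assumptions; they only sketch how a client might call the Final Results API after a webhook fires.

```typescript
// Sketch of retrieving final artifacts once a webhook signals they are ready.
// The endpoint path and response contents are assumptions, not documented values.
async function fetchFinalResults(
  apiBase: string,
  apiKey: string,
  evaluationId: string
): Promise<unknown> {
  const res = await fetch(`${apiBase}/evaluations/${evaluationId}/final-results`, {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) throw new Error(`Final Results API returned ${res.status}`);
  // Expected (assumed) contents: scores, section summaries, step-wise marking,
  // model answers, teacher feedback, approval flags, and annotated-copy links.
  return res.json();
}
```
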
A human‑in‑the‑loop layer lets reviewers validate, override, or approve automated scoring and feedback before results are published. Review events and fields (e.g., approval flags like isApproved) are captured with auditability, ensuring educator control for high‑stakes scenarios. This preserves quality while keeping the automated pipeline efficient at scale.
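
For example, a downstream consumer might gate its own publishing step on that flag; apart from the isApproved field name, the logic below is an assumption.

```typescript
// Minimal gating sketch: only forward results a human reviewer has approved.
// Everything except the isApproved field name is an assumption.
function publishIfApproved(result: { isApproved: boolean }): { published: boolean; reason?: string } {
  if (!result.isApproved) {
    // High-stakes results stay unpublished until a reviewer approves them.
    return { published: false, reason: "awaiting human-in-the-loop approval" };
  }
  return { published: true };
}
```
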
The evaluation layer ingests text, handwritten scripts, diagrams, audio, and video, applying rubric‑ or model‑answer‑based logic designed to target up to 95% accuracy. Outputs are normalized into structured artifacts—scores, feedback JSON, annotated copies—that feed action plans and content generation. Optional checkpoints maintain quality without breaking the automated flow.
Workflows can be configured for CBSE/ICSE alignment and GDPR compliance, with rubrics and model answers mapped to board expectations and privacy controls applied throughout. Role‑based access, audit trails, and transparent reporting ensure compliance from evaluation through personalization and content generation. These controls carry through APIs and the white‑label UI.
Use webhooks to receive evaluation completion and publish events, then query APIs to fetch final artifacts (scores, section breakdowns, step‑wise marking, annotated copies). Downstream services can trigger action‑plan creation and content generation upon these callbacks, keeping the Evaluate → Personalize → Generate chain in sync. Observation dashboards consume the same artifacts for governance and reteach planning.
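
A minimal handler sketch, assuming hypothetical event names and downstream helpers (none of which are documented here), could look like this.

```typescript
// Sketch of a webhook consumer keeping Evaluate → Personalize → Generate in sync.
// Event types, fields, and the downstream helpers are assumptions for illustration.
interface EvaluationEvent {
  type: "evaluation.completed" | "evaluation.published";
  evaluationId: string;
}

// Hypothetical integrations; in practice these would call your own services
// and the platform APIs described above.
declare function fetchFinalResults(evaluationId: string): Promise<unknown>;
declare function createActionPlans(results: unknown): Promise<void>;
declare function generateRemediationContent(results: unknown): Promise<void>;

async function handleEvaluationWebhook(event: EvaluationEvent): Promise<void> {
  if (event.type !== "evaluation.published") return; // wait until results are final

  // 1. Pull the final artifacts (scores, section breakdowns, annotated copies).
  const results = await fetchFinalResults(event.evaluationId);

  // 2. Trigger personalization: Student, Teacher, and Parent Action Plans.
  await createActionPlans(results);

  // 3. Trigger content generation for the observed gaps in AI Studio.
  await generateRemediationContent(results);
}
```
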
After grading, feedback JSON, score breakdowns, and annotations flow into the Personalization Layer to create targeted Student, Teacher, and Parent Action Plans. The system can auto‑generate NEP‑aligned lesson plans and worksheets tied to observed gaps, with educators able to refine recommendations via a human‑in‑the‑loop step. This ensures remediation content is grounded in real performance.
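
One way to ground that mapping, sketched here with assumed field names and an arbitrary threshold, is to flag sections scoring below a mastery cutoff as remediation topics.

```typescript
// Sketch: derive remediation topics from per-section scores in the feedback JSON.
// Field names and the 0.6 threshold are assumptions; adapt them to the real schema.
function weakTopics(
  sections: Array<{ sectionId: string; score: number; maxScore: number }>,
  masteryThreshold = 0.6
): string[] {
  return sections
    .filter((s) => s.maxScore > 0 && s.score / s.maxScore < masteryThreshold)
    .map((s) => s.sectionId);
}
```
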
Observation brings classroom and usage signals back into the system, informing reteach strategies and content iteration. Leadership dashboards surface trends across classes and subjects, while audit trails capture human‑in‑the‑loop decisions for accountability. These signals continuously improve rubrics, model answers, and generated materials across cycles.