The Exam Evaluation Suite powers the Evaluate stage of CrazyGoldFish’s AI reasoning layer. This workflow shows how exam responses move from ingestion → evaluation → publishing, and then flow downstream into Personalization and AI Studio.
Step-by-Step Flow
1. Ingest Submissions (Upstream)
Upload handwritten scans (PDF/images) or digital responses.
Create or attach a Model Answer / Rubric to set evaluation criteria.
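To make the ingestion step concrete, here is a minimal sketch of what an attached Model Answer / Rubric could look like. All field names (`exam_id`, `question_no`, `max_marks`, and so on) are hypothetical illustrations for this sketch, not CrazyGoldFish's actual schema.

```python
# Minimal sketch of a rubric payload attached at ingestion.
# Field names are hypothetical, not the product's documented schema.
rubric = {
    "exam_id": "EX-2024-001",  # hypothetical identifier
    "questions": [
        {
            "question_no": 1,
            "max_marks": 10,
            "model_answer": "Photosynthesis converts light energy into chemical energy.",
            "criteria": ["mentions chlorophyll", "names inputs and outputs"],
        },
    ],
}

def total_marks(rubric):
    """Sum the maximum marks across all questions in the rubric."""
    return sum(q["max_marks"] for q in rubric["questions"])

print(total_marks(rubric))  # 10 for this single-question rubric
```

Keeping the model answer and per-question criteria together like this is what lets the later evaluation step grade each answer against explicit, auditable criteria.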
2. Configure Evaluation
Select evaluation method: Model-Answer.
Add exam metadata (subject, marks, question mapping).
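A hedged sketch of the exam metadata and question mapping described above. The structure and names here (`question_mapping`, page lists) are assumptions made for illustration, not a documented format:

```python
# Illustrative exam metadata with a question-to-page mapping.
# Structure is an assumption for this sketch, not the product's schema.
exam_metadata = {
    "subject": "Biology",
    "total_marks": 50,
    # Maps each question number to the answer-sheet pages where it appears.
    "question_mapping": {1: [1], 2: [2, 3], 3: [4]},
}

def pages_for(metadata, question_no):
    """Look up which scanned pages hold a given question's answer."""
    return metadata["question_mapping"].get(question_no, [])

print(pages_for(exam_metadata, 2))  # [2, 3]
```

A mapping like this is what allows long, multi-page answers to be matched to the right rubric entry before grading.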
3. Run AI Evaluation
Trigger the finalize step → the AI parses multimodal inputs and applies the chosen grading logic.
Supports long answer sheets.
4. Review & Re-Evaluate
Inspect scores + feedback JSON in dashboards.
Handle queries and re-checks, and maintain an auditable trail for compliance.
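The "scores + feedback JSON" surfaced in dashboards can be sketched as follows. The payload shape (`scores`, `awarded`, `max`) is a hypothetical example, not the product's actual response format:

```python
import json

# Illustrative feedback JSON as it might appear in a review dashboard.
# The structure is an assumption for this sketch, not the product's schema.
feedback_json = json.dumps({
    "student_id": "S-101",
    "scores": [
        {"question_no": 1, "awarded": 8, "max": 10, "feedback": "Missed one criterion."},
        {"question_no": 2, "awarded": 5, "max": 5, "feedback": "Complete answer."},
    ],
})

def summarize(raw):
    """Aggregate awarded vs. maximum marks from a feedback payload."""
    payload = json.loads(raw)
    awarded = sum(s["awarded"] for s in payload["scores"])
    maximum = sum(s["max"] for s in payload["scores"])
    return awarded, maximum

print(summarize(feedback_json))  # (13, 15)
```

Because the per-question feedback is machine-readable, re-checks can target a single question and every change can be logged for the audit trail.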
5. Publish
Publish final results and optional annotated answer copies.
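The five steps above can be sketched end to end as a small pipeline. Every function name here is a hypothetical stand-in for a pipeline stage, and the grading rule is a deliberately trivial placeholder, not the AI evaluation logic:

```python
# Sketch of the ingestion → evaluation → publishing flow as plain functions.
# All names are hypothetical stand-ins, not a real API.
def ingest(submission, rubric):
    """Step 1–2: accept a response and attach its rubric."""
    return {"submission": submission, "rubric": rubric, "audit": ["ingested"]}

def evaluate(job):
    """Step 3: placeholder grading — full marks if the key phrase appears."""
    hit = job["rubric"]["key_phrase"] in job["submission"]
    job["score"] = job["rubric"]["max"] if hit else 0
    job["audit"].append("evaluated")
    return job

def publish(job):
    """Step 5: release the result, keeping the audit trail intact."""
    job["audit"].append("published")
    return job

result = publish(evaluate(ingest(
    "Photosynthesis converts light energy into chemical energy.",
    {"key_phrase": "light energy", "max": 5},
)))
print(result["score"], result["audit"])  # 5 ['ingested', 'evaluated', 'published']
```

The audit list threaded through every stage mirrors the compliance trail described in the Review step: each transition is recorded, so published results can be traced back to their evaluation.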