Step-by-step flow for AI-powered assignment evaluation – from multi-modal submissions to feedback, re-evaluation, and downstream Personalization.
The Assignment Evaluation Suite extends CrazyGoldFish’s Evaluation Layer to daily classroom work. It evaluates handwritten, digital, audio, and video submissions, provides interactive feedback, and powers downstream Personalization and AI Studio.
Trigger the finalize step → the AI parses multi-modal inputs and applies the selected grading logic.
Optimized for quick turnaround to fit daily workflows.
4
Feedback & Re-Evaluation
Generate detailed feedback JSON with strengths + areas for improvement.
Students can query results, triggering re-checks with audit logs.
5
Publish & Export (Outputs)
Publish feedback + results to students and teachers.
Export PDF/CSV summaries and update LMS/ERP dashboards.
Data flows downstream into the Personalization Suite for reteach strategies.
How do I ingest multi-modal assignment submissions and attach grading context in the Assignment Evaluation Suite?
Use the Ingest Submissions step to upload handwritten scans, typed documents, audio, or video. Attach the relevant Model Answer and/or Rubric so the evaluation engine has the grading context it needs for each submission.
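As a rough illustration, an ingestion payload might pair each uploaded artifact with its grading context. This is a minimal sketch only; the field names (`files`, `grading_context`, `model_answer_id`, `rubric_id`) are assumptions for illustration, not CrazyGoldFish's documented API.

```python
# Illustrative only: field names and structure are assumptions,
# not CrazyGoldFish's actual ingestion API.
from pathlib import Path

submission = {
    "student_id": "stu-0421",
    "assignment_id": "asg-essay-07",
    # One entry per uploaded artifact: handwritten scan, typed doc, audio, or video.
    "files": [
        {"path": str(Path("uploads/stu-0421-essay.pdf")), "kind": "handwritten_scan"},
        {"path": str(Path("uploads/stu-0421-reading.mp3")), "kind": "audio"},
    ],
    # Grading context attached at ingest time; either or both may be provided.
    "grading_context": {
        "model_answer_id": "ma-essay-07-v2",
        "rubric_id": "rubric-essay-07",
    },
}

print(submission)
```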
Can I choose Model-Answer, Rubric, or a hybrid approach when configuring assignment evaluation?
Yes. In Configure Evaluation, select Model-Answer, Rubric, or both—hybrid grading applies rubric criteria alongside the model answer for consistency on subjective work. This selection drives how the AI interprets responses during scoring.
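A sketch of how the three grading modes could be represented in a configuration object, assuming a simple enum; the mode names mirror the options above but are not taken from CrazyGoldFish's API.

```python
# Hypothetical configuration sketch; mode names are assumptions.
from enum import Enum

class GradingMode(Enum):
    MODEL_ANSWER = "model_answer"
    RUBRIC = "rubric"
    HYBRID = "hybrid"  # rubric criteria applied alongside the model answer

def grading_inputs(mode: GradingMode) -> list[str]:
    """Return which grading artifacts the engine would consult for a given mode."""
    if mode is GradingMode.MODEL_ANSWER:
        return ["model_answer"]
    if mode is GradingMode.RUBRIC:
        return ["rubric"]
    return ["model_answer", "rubric"]

print(grading_inputs(GradingMode.HYBRID))  # ['model_answer', 'rubric']
```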
What assignment metadata should I provide, and how is it used across the workflow?
Add subject, marks, and completion date during Configure Evaluation. This metadata contextualizes results for publishing, powers PDF/CSV summaries, and helps your LMS/ERP dashboards present assignment-level insights consistently.
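The metadata itself is small and could be modeled as a simple record; the field names below are assumptions used for illustration.

```python
# Illustrative metadata shape; field names are assumptions.
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class AssignmentMetadata:
    subject: str
    max_marks: int
    completion_date: date

meta = AssignmentMetadata(subject="English", max_marks=20, completion_date=date(2024, 3, 15))
# Reused later for publishing, PDF/CSV summaries, and LMS/ERP dashboards.
print(asdict(meta))
```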
What happens during the finalize step in Run AI Evaluation for assignments?
Trigger the finalize step to start AI processing—CrazyGoldFish parses the multi-modal inputs and applies your chosen grading logic (Model-Answer, Rubric, or hybrid). The flow is optimized for quick turnaround so it fits daily classroom cycles.
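If you drive the finalize step programmatically, the usual pattern is trigger-then-poll. The sketch below is generic and uses stand-in callables; CrazyGoldFish's actual trigger and status calls may look different.

```python
# Generic trigger-and-poll sketch for an asynchronous finalize step.
# The trigger/get_status callables are stand-ins, not CrazyGoldFish's API.
import time
from typing import Callable

def run_finalize(trigger: Callable[[], str],
                 get_status: Callable[[str], str],
                 poll_seconds: float = 2.0,
                 timeout_seconds: float = 300.0) -> str:
    """Trigger evaluation, then poll until it completes, fails, or times out."""
    job_id = trigger()
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        status = get_status(job_id)
        if status in ("completed", "failed"):
            return status
        time.sleep(poll_seconds)
    raise TimeoutError(f"Evaluation job {job_id} did not finish in time")

# Stubbed usage so the sketch runs standalone.
statuses = iter(["queued", "running", "completed"])
print(run_finalize(lambda: "job-123", lambda _id: next(statuses), poll_seconds=0))
```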
What does the feedback JSON include for evaluated assignments, and how is it used?
The system generates a detailed feedback JSON highlighting strengths and areas for improvement for each student. It underpins interactive feedback experiences and feeds downstream systems like the Personalization Suite to target reteach strategies.
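A plausible shape for that per-student feedback JSON is sketched below; the keys are assumptions inferred from the description (strengths plus areas for improvement), not a published schema.

```python
# Assumed shape of the per-student feedback JSON, for illustration only.
import json

feedback = {
    "student_id": "stu-0421",
    "assignment_id": "asg-essay-07",
    "score": 16,
    "max_marks": 20,
    "strengths": [
        "Clear thesis statement",
        "Good use of supporting evidence",
    ],
    "areas_for_improvement": [
        "Paragraph transitions",
        "Citation formatting",
    ],
}

print(json.dumps(feedback, indent=2))
```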
How are student queries and re-checks handled, and are audit logs maintained?
Students can query their results, which triggers re-evaluation flows with audit logs for transparency. These logs help institutions track what changed and why, supporting a human-in-the-loop review process where educators retain oversight.
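An audit record for a re-evaluation typically captures what changed, why, and who approved it. The record below is a hedged sketch; every field name is an assumption.

```python
# Illustrative audit-log record for a re-evaluation; field names are assumptions.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class ReEvaluationAudit:
    submission_id: str
    query_text: str      # the student's challenge to the original result
    original_score: int
    revised_score: int
    reviewed_by: str     # educator who signed off (human-in-the-loop)
    reason: str
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

entry = ReEvaluationAudit(
    submission_id="sub-0421-essay-07",
    query_text="Question 3 was marked wrong but matches the model answer.",
    original_score=14,
    revised_score=16,
    reviewed_by="teacher-ms-rao",
    reason="Partial credit restored after rubric criterion 2 re-check.",
)
print(asdict(entry))
```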
What publish and export options are available, and how do results reach LMS/ERP dashboards?
Publish feedback and results to students and teachers directly from the workflow. Export PDF/CSV summaries and sync standardized outputs to LMS/ERP dashboards so scores and feedback are visible in your existing systems.
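For the CSV side of export, the summary reduces to rows of scores and metadata. A minimal standard-library sketch is shown below; the column names are assumptions, not a fixed export schema.

```python
# Minimal CSV export sketch; column names are assumptions for illustration.
import csv

results = [
    {"student_id": "stu-0421", "subject": "English", "score": 16, "max_marks": 20},
    {"student_id": "stu-0388", "subject": "English", "score": 18, "max_marks": 20},
]

with open("assignment_summary.csv", "w", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=["student_id", "subject", "score", "max_marks"])
    writer.writeheader()
    writer.writerows(results)
# The same rows could be pushed to an LMS/ERP integration instead of a file.
```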
How does assignment evaluation data flow into downstream personalization and content generation?
Assignment outputs feed into the Personalization Suite to generate reteach strategies based on detected gaps. The same signals can power AI Studio for curriculum-aligned content, creating a closed loop from evaluation to action.
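Conceptually, the handoff maps detected gaps to reteach suggestions. The toy mapping below illustrates that evaluation-to-personalization flow; the mapping content and function names are invented for the example.

```python
# Toy sketch of the evaluation-to-personalization handoff; the mapping is invented.
GAP_TO_RETEACH = {
    "Paragraph transitions": "Mini-lesson: linking sentences and discourse markers",
    "Citation formatting": "Worksheet: in-text citations and reference lists",
}

def reteach_plan(areas_for_improvement: list[str]) -> list[str]:
    """Turn the feedback JSON's improvement areas into reteach suggestions."""
    return [GAP_TO_RETEACH.get(gap, f"Review topic: {gap}") for gap in areas_for_improvement]

print(reteach_plan(["Paragraph transitions", "Citation formatting", "Thesis clarity"]))
```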
Is the Assignment Evaluation Suite optimized for daily classroom turnaround, and what enables that speed?
Yes. The workflow streamlines steps—ingestion, configuration, and a single finalize trigger—so multi-modal evaluation runs quickly without manual bottlenecks, fitting everyday assignment cycles.
How does this assignment workflow align with accuracy, review, and compliance expectations?
Re-evaluation with audit logs keeps the process transparent for assignments, and educators can stay human-in-the-loop for critical checks. Across CrazyGoldFish’s Evaluation Layer, workflows are CBSE/ICSE/GDPR aligned and designed to target about 95% accuracy on rubric/model-answer tasks, extending those safeguards to daily classroom work.