The Exam Evaluation Suite powers the Evaluate stage of CrazyGoldFish’s AI reasoning layer. This workflow shows how exam responses move from ingestion → evaluation → publishing, and then flow downstream into Personalization and AI Studio.

Step-by-Step Flow

1. Ingest Submissions (Upstream)
Upload handwritten scans (PDF/images) or digital responses.
Create or attach a Model Answer / Rubric to set evaluation criteria.
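If you integrate through the REST APIs mentioned in the FAQ below, ingestion could look roughly like the following sketch. The base URL, endpoint paths, and field names are hypothetical placeholders chosen for illustration, not the documented CrazyGoldFish API.

```python
import requests

API = "https://api.example.com/v1"        # hypothetical base URL
HEADERS = {"Authorization": "Bearer <your-api-key>"}

# Upload a handwritten scan (PDF) as one submission; endpoint and fields are illustrative.
with open("student_042_answer_sheet.pdf", "rb") as f:
    submission = requests.post(
        f"{API}/submissions",
        headers=HEADERS,
        files={"file": ("student_042_answer_sheet.pdf", f, "application/pdf")},
        data={"exam_id": "midterm-physics-2025", "student_id": "042"},
    ).json()

# Attach a Model Answer / Rubric that defines the evaluation criteria for the exam.
requests.post(
    f"{API}/exams/midterm-physics-2025/rubric",
    headers=HEADERS,
    json={
        "questions": [
            {"number": 1, "max_marks": 5, "model_answer": "State Newton's second law ..."},
            {"number": 2, "max_marks": 10, "model_answer": "Derive the work-energy theorem ..."},
        ]
    },
)
```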
2. Configure Evaluation
Select the evaluation method: Model Answer.
Add exam metadata (subject, marks, question mapping).
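The configuration in this step could be expressed as a single payload. The field names below (method, subject, total_marks, question_mapping) are assumptions for illustration rather than the real schema.

```python
import requests

API = "https://api.example.com/v1"        # hypothetical base URL
HEADERS = {"Authorization": "Bearer <your-api-key>"}

# Evaluation configuration: method plus exam metadata (illustrative field names).
evaluation_config = {
    "exam_id": "midterm-physics-2025",
    "method": "model-answer",
    "metadata": {
        "subject": "Physics",
        "total_marks": 70,
        # Map each question number to the answer-sheet pages it appears on.
        "question_mapping": {"1": [1], "2": [2, 3], "3": [4, 5]},
    },
}

requests.put(
    f"{API}/exams/midterm-physics-2025/evaluation-config",
    headers=HEADERS,
    json=evaluation_config,
)
```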
3. Run AI Evaluation
Trigger the finalize step → AI parses multimodal inputs and applies the chosen grading logic.
Supports long answer sheets.
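Triggering the finalize step and waiting for the evaluation job could be sketched as below; the /evaluate and /jobs paths and the job fields are assumptions, and long answer sheets may simply take longer to process.

```python
import time
import requests

API = "https://api.example.com/v1"        # hypothetical base URL
HEADERS = {"Authorization": "Bearer <your-api-key>"}

# Trigger the finalize step; the platform parses the multimodal inputs server-side.
job = requests.post(
    f"{API}/exams/midterm-physics-2025/evaluate", headers=HEADERS
).json()

# Poll until the evaluation job finishes (long answer sheets can take a while).
while True:
    status = requests.get(f"{API}/jobs/{job['id']}", headers=HEADERS).json()
    if status["state"] in ("completed", "failed"):
        break
    time.sleep(10)
```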
4. Review & Re-Evaluate
Inspect scores and feedback JSON in dashboards.
Handle queries and re-checks, and maintain an auditable trail for compliance.
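A sketch of the review loop: pull scores and feedback JSON, then flag low-confidence answers for educator re-evaluation. The results structure and the confidence field are illustrative assumptions, not a published schema.

```python
import requests

API = "https://api.example.com/v1"        # hypothetical base URL
HEADERS = {"Authorization": "Bearer <your-api-key>"}

# Fetch scores and feedback JSON for review in your own dashboard.
results = requests.get(
    f"{API}/exams/midterm-physics-2025/results", headers=HEADERS
).json()

for sub in results["submissions"]:
    for q in sub["questions"]:
        # Route low-confidence scores to a human reviewer / re-evaluation queue.
        if q.get("confidence", 1.0) < 0.7:
            requests.post(
                f"{API}/submissions/{sub['id']}/re-evaluate",
                headers=HEADERS,
                json={"question": q["number"], "reason": "low confidence"},
            )
```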
5. Publish
Publish final results and optional annotated answer copies.
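Publishing and retrieving an annotated answer copy could then look like the sketch below, with the same caveat that the endpoints and the submission identifier are placeholders.

```python
import requests

API = "https://api.example.com/v1"        # hypothetical base URL
HEADERS = {"Authorization": "Bearer <your-api-key>"}

# Publish final results once review and re-checks are complete.
requests.post(
    f"{API}/exams/midterm-physics-2025/publish",
    headers=HEADERS,
    json={"include_annotated_copies": True},
)

# Optionally download an annotated answer copy for a given submission.
pdf = requests.get(f"{API}/submissions/sub_042/annotated-copy", headers=HEADERS)
with open("student_042_annotated.pdf", "wb") as f:
    f.write(pdf.content)
```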

FAQ

This workflow covers the Evaluate stage end-to-end: exam responses are ingested, evaluated, and then published. After publishing, results flow downstream into Personalization and AI Studio so the same data powers targeted follow‑ups and creation workflows.
The Evaluation Layer handles multimodal submissions across text, handwriting, diagrams, audio, and video. It parses these inputs during evaluation and standardizes outputs for consistent publishing and downstream use.
The workflow targets up to 95% accuracy on structured, rubric- or model‑answer–aligned tasks. A human‑in‑the‑loop review path is built in for edge cases and overrides to maintain trust and control.
You’ll receive detailed feedback JSON, annotated copies, and scored outputs. These standardized artifacts are ready for export to institutional systems and serve as inputs for Personalization and AI Studio.
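The exact feedback JSON schema is not shown on this page; the snippet below only illustrates, with invented field names, the kind of per-question detail these artifacts typically need to carry for export and downstream use.

```python
# Illustrative shape only; the real feedback JSON schema may differ.
feedback_example = {
    "student_id": "042",
    "exam_id": "midterm-physics-2025",
    "total_score": 54,
    "questions": [
        {
            "number": 1,
            "awarded_marks": 4,
            "max_marks": 5,
            "feedback": "Correct derivation, but units were omitted in the final step.",
            "rubric_criteria_met": ["setup", "derivation"],
        },
    ],
}
```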
Human‑in‑the‑loop review is part of the Evaluate stage to handle edge cases and enable educator overrides. Audit‑ready logs and standardized outputs support governance and quality assurance across cohorts.
Yes—workflows, rubrics, and reporting are CBSE/ICSE/GDPR aligned. Practically, you get audit‑ready logs, standardized outputs, and easy exports that fit school and board compliance requirements.
You can integrate via a white‑label embeddable UI or REST APIs, with no AI team required to get started. Results publish as standardized artifacts that plug into your SIS/LMS and downstream modules.
Publishing produces standardized, audit‑friendly outputs that are easy to export into SIS/LMS dashboards. This keeps evaluations consistent while preserving compliance and traceability.
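As one example of such an export, a short script could flatten published results into a CSV that a typical SIS/LMS gradebook accepts. It assumes results were saved locally in the illustrative shape used earlier on this page.

```python
import csv
import json

# Flatten published results (feedback JSON saved locally) into a gradebook CSV.
# The field names follow the illustrative schema above, not a documented export format.
with open("results.json") as f:
    results = json.load(f)

with open("gradebook.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["student_id", "total_score"])
    for sub in results["submissions"]:
        writer.writerow([sub["student_id"], sub["total_score"]])
```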
Once published, the structured scores and feedback JSON become inputs to Personalization and AI Studio. This enables continuous workflows where evaluated responses drive targeted interventions and creation flows.
Yes—the Evaluation Layer is designed for multimodal subjective evaluation across handwriting, diagrams, essays, audio, and video. Accuracy is anchored by rubric/model‑answer alignment with human‑in‑the‑loop safeguards for scale and reliability.