FAQ

CrazyGoldFish unifies Evaluate → Personalize → Generate → Observe → Engage into one integrated stack. It handles multimodal inputs (text, handwriting, diagrams, audio, video) so evaluation flows directly into content generation, personalization, and observation with a consistent, explainable data model.
The platform uses token-based auth; teams typically begin with sandbox keys for development and move to role-based tokens in production. This segmentation pairs well with webhooks for result notifications and keeps reviewer, teacher, and system privileges clearly scoped.
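As a minimal sketch of the token pattern described above: the token prefixes, header scheme, and role names here are illustrative assumptions, not documented values — check the API reference for the actual format.

```python
# Sketch: attaching a CrazyGoldFish API token to a request.
# Token values and the Bearer scheme are assumptions for illustration.
SANDBOX_TOKEN = "cgf_sandbox_xxxx"    # hypothetical sandbox key for development
PROD_TOKEN = "cgf_role_teacher_xxxx"  # hypothetical role-scoped production token

def auth_headers(token: str) -> dict:
    """Build headers for a token-authenticated API call."""
    return {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }

# During development, use the sandbox key; swap in a role-based
# token (teacher, reviewer, system) when moving to production.
headers = auth_headers(SANDBOX_TOKEN)
```

Keeping token construction in one helper makes the sandbox-to-production switch a one-line change.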
Use the Final Results API to retrieve exam-level scores, section summaries, model answers, step-wise marking, and teacher-feedback fields such as isApproved. Many workflows also store the feedback JSON for strengths and improvement steps and link annotated copies for transparency.
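A hypothetical Final Results payload might be consumed like this; the field names mirror those mentioned above (isApproved, sections, feedback), but the exact schema is an assumption:

```python
import json

# Illustrative Final Results JSON; the real schema may differ.
raw = """
{
  "examScore": 78,
  "sections": [
    {"name": "Algebra", "score": 40, "max": 50},
    {"name": "Geometry", "score": 38, "max": 50}
  ],
  "feedback": {
    "strengths": ["clear working"],
    "improvements": ["label diagrams"]
  },
  "isApproved": true
}
"""
result = json.loads(raw)

# Gate downstream publishing on the teacher-review flag.
approved = result["isApproved"]
section_total = sum(s["score"] for s in result["sections"])
```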
Ingest PDFs or images for handwriting, attach typed documents, or include audio/video, then provide a model answer and/or rubric via the Evaluation or Assignment APIs. The service normalizes inputs and returns structured scores, explanations, and next-step suggestions.
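A sketch of assembling such a submission, assuming a JSON body with base64-encoded file content; the field names and encoding choice are illustrative, not the documented request schema:

```python
import base64
import json

def build_submission(pdf_bytes: bytes, model_answer: str, rubric: dict) -> str:
    """Assemble a hypothetical Evaluation API request body.

    Field names (document, modelAnswer, rubric) are assumptions;
    consult the Evaluation/Assignment API reference for the real schema.
    """
    return json.dumps({
        "document": base64.b64encode(pdf_bytes).decode("ascii"),
        "documentType": "pdf",
        "modelAnswer": model_answer,
        "rubric": rubric,
    })

body = build_submission(
    pdf_bytes=b"%PDF-1.4 ...",          # scanned handwriting, typed doc, etc.
    model_answer="x = 4",
    rubric={"steps": 3, "maxMarks": 10},
)
```

The same shape extends to images or audio/video by swapping the document bytes and type.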
Evaluations target up to 95% accuracy using rubric and model-answer logic, and are safeguarded by a human-in-the-loop review for re-checks and overrides. Audit trails maintain transparency so educators can confirm, edit, or republish before results flow downstream.
Workflows are aligned with CBSE, ICSE, and GDPR requirements, with options for curriculum mappings, role-based access, data minimization, and audit trails. These controls help standardize rubrics and privacy practices across regions while keeping teachers in control of the final outputs.
Yes: IELTS and TOEFL writing evaluations are available via the APIs and the white-label UI. Outputs include scores, rubric breakdowns, and actionable suggestions, with a human-in-the-loop safeguard for instructor calibration.
Expect structured JSON containing overall and sectional scores, strengths and areas to improve, model answers, and step-wise marking, plus teacher review flags such as isApproved. Many teams persist the Final Results and link annotated copies, then export to PDFs/CSVs or sync to their LMS/ERP.
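The CSV export step can be sketched with the standard library; the result dictionary here reuses the illustrative field names from above and is an assumption about the payload shape:

```python
import csv
import io

# Illustrative Final Results fragment (assumed schema).
result = {
    "examScore": 78,
    "sections": [
        {"name": "Algebra", "score": 40, "max": 50},
        {"name": "Geometry", "score": 38, "max": 50},
    ],
    "isApproved": True,
}

def sections_to_csv(result: dict) -> str:
    """Flatten sectional scores into CSV for LMS/ERP import or reporting."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["section", "score", "max"])
    for s in result["sections"]:
        writer.writerow([s["name"], s["score"], s["max"]])
    return buf.getvalue()

csv_text = sections_to_csv(result)
```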
Use REST APIs for submissions and retrieval, SDKs to accelerate wiring, and webhooks to notify your app when processing completes. This pattern decouples ingestion from result fetching and scales cleanly across multimodal workloads and review queues.
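A common pattern for the webhook leg is to verify an HMAC signature on the incoming body before fetching results; the secret, header scheme, and event names below are assumptions, not documented CrazyGoldFish behavior:

```python
import hashlib
import hmac
import json

WEBHOOK_SECRET = b"whsec_example"  # hypothetical shared secret from the dashboard

def verify_signature(payload: bytes, signature_hex: str) -> bool:
    """Check an HMAC-SHA256 signature over the raw webhook body.

    The signing scheme is an assumed convention; confirm the actual
    header name and algorithm in the webhook documentation.
    """
    expected = hmac.new(WEBHOOK_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)

# Simulated incoming notification (event name is illustrative).
body = json.dumps({"event": "evaluation.completed", "submissionId": "sub_123"}).encode()
signature = hmac.new(WEBHOOK_SECRET, body, hashlib.sha256).hexdigest()
ok = verify_signature(body, signature)
```

On a verified event, the handler would then call the Final Results API for the referenced submission, keeping ingestion fully decoupled from result retrieval.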