CrazyGoldFish is the complete AI reasoning layer for education, unifying evaluation, personalization, content generation, observation, and engagement into one integrated AI stack.
On the docs home page, what does the “AI reasoning layer” encompass and how are modules organized?
CrazyGoldFish unifies Evaluate → Personalize → Generate → Observe → Engage into one integrated stack. It handles multimodal inputs (text, handwriting, diagrams, audio, video) so evaluation flows directly into content generation, personalization, and observation with a consistent, explainable data model.
What’s the recommended starting path in the docs to embed CrazyGoldFish into an LMS or app?
Start with the white-label, embeddable UI, which drops in via a secure link or iframe and offers branding controls (colors, fonts). If you prefer code-level control, use the REST APIs and webhooks to create assessments, submit responses, and fetch results with token-based auth.
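For a concrete picture of the iframe route, here is a minimal sketch of mounting the embeddable UI from a host page. The embed origin, path, and query parameters below are illustrative assumptions, not documented values; substitute the embed URL and token issued by your dashboard.

```ts
// Minimal sketch: mounting the white-label UI in a host page via iframe.
// EMBED_ORIGIN, the path, and every query parameter here are assumed names.
const EMBED_ORIGIN = "https://embed.crazygoldfish.example"; // hypothetical host

function mountAssessment(
  container: HTMLElement,
  assessmentId: string,
  embedToken: string,
): void {
  const url = new URL(`/assessments/${assessmentId}`, EMBED_ORIGIN);
  url.searchParams.set("token", embedToken);       // scoped, short-lived token
  url.searchParams.set("primaryColor", "#1a73e8"); // branding controls
  url.searchParams.set("font", "Inter");

  const frame = document.createElement("iframe");
  frame.src = url.toString();
  frame.style.width = "100%";
  frame.style.height = "800px";
  frame.style.border = "none";
  container.appendChild(frame);
}
```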
How do we authenticate with the APIs and separate sandbox from production?
The platform uses token-based auth; teams typically begin with sandbox keys for development and move to role-based tokens in production. This segmentation pairs well with webhooks for result notifications and keeps reviewer, teacher, and system privileges clearly scoped.
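One way to keep that separation explicit in code is a small config loader that picks the key and base URL from the environment. The variable names and hosts below are assumptions for illustration only.

```ts
// Sketch: separating sandbox and production credentials by environment.
// CGF_ENV, both token variables, and both base URLs are hypothetical names.
interface CgfConfig {
  baseUrl: string;
  apiToken: string; // sandbox key in dev, role-scoped token in production
}

function loadConfig(): CgfConfig {
  if (process.env.CGF_ENV === "production") {
    return {
      baseUrl: "https://api.crazygoldfish.example/v1", // assumed prod host
      apiToken: process.env.CGF_PROD_TOKEN!,           // role-based token
    };
  }
  return {
    baseUrl: "https://sandbox.crazygoldfish.example/v1", // assumed sandbox host
    apiToken: process.env.CGF_SANDBOX_TOKEN!,            // sandbox key
  };
}
```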
Which API returns consolidated grading outputs we should persist?
Use the Final Results API to retrieve exam-level scores, section summaries, model answers, step-wise marking, and teacher feedback fields such as isApproved. Many workflows also store the feedback JSON (strengths and improvement steps) and link annotated copies for transparency.
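A fetch-and-type sketch under those field names: the endpoint path and everything in the response shape beyond isApproved, step-wise marking, and the strengths/improvements feedback are assumptions, not documented structure.

```ts
// Sketch: fetching a consolidated result. The path and all field names not
// taken from the docs (isApproved, step-wise marking, feedback) are assumed.
interface FinalResult {
  examScore: number;
  sections: Array<{ name: string; score: number; summary: string }>;
  modelAnswers: string[];
  stepWiseMarking: Array<{ step: string; marksAwarded: number }>;
  feedback: { strengths: string[]; improvements: string[] };
  isApproved: boolean; // teacher review flag
}

async function fetchFinalResult(examId: string): Promise<FinalResult> {
  const res = await fetch(
    `https://api.crazygoldfish.example/v1/final-results/${examId}`, // assumed path
    { headers: { Authorization: `Bearer ${process.env.CGF_API_TOKEN}` } },
  );
  if (!res.ok) throw new Error(`Final Results request failed: ${res.status}`);
  return (await res.json()) as FinalResult;
}
```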
How do we submit multimodal responses and specify grading logic?
Ingest PDFs or images for handwriting, attach typed documents, or include audio/video, then provide a model answer and/or rubric via the Evaluation or Assignment APIs. The service normalizes inputs and returns structured scores, explanations, and next-step suggestions.
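A minimal submission sketch, assuming a multipart endpoint and form field names that are not taken from the docs:

```ts
// Sketch: submitting a handwritten PDF plus a model answer and rubric.
// The endpoint path, form field names, and rubric shape are hypothetical.
import { readFile } from "node:fs/promises";

async function submitResponse(pdfPath: string): Promise<string> {
  const form = new FormData();
  form.append(
    "file",
    new Blob([await readFile(pdfPath)], { type: "application/pdf" }),
    "answer.pdf",
  );
  form.append("modelAnswer", "Photosynthesis converts light energy into chemical energy.");
  form.append("rubric", JSON.stringify({ criteria: [{ name: "Accuracy", maxMarks: 5 }] }));

  const res = await fetch("https://api.crazygoldfish.example/v1/evaluations", { // assumed
    method: "POST",
    headers: { Authorization: `Bearer ${process.env.CGF_API_TOKEN}` },
    body: form,
  });
  if (!res.ok) throw new Error(`Submission failed: ${res.status}`);
  const { evaluationId } = (await res.json()) as { evaluationId: string };
  return evaluationId; // poll, or wait for a webhook carrying this id
}
```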
How is quality controlled for subjective writing or diagram-heavy tasks?
Evaluations target accuracy of up to 95% using rubric and model-answer logic, safeguarded by human-in-the-loop review for re-checks and overrides. Audit trails maintain transparency, so educators can confirm, edit, or republish before results flow downstream.
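In integration code, that safeguard usually shows up as a gate on the review flag: hold provisional results, publish only what a teacher has confirmed. Only isApproved comes from the docs; the rest of this shape is illustrative.

```ts
// Sketch: a human-in-the-loop gate. Only isApproved is a documented field;
// the result shape and the publish callback are illustrative assumptions.
interface ReviewedResult {
  evaluationId: string;
  score: number;
  isApproved: boolean;
}

function publishIfApproved(
  result: ReviewedResult,
  publish: (r: ReviewedResult) => void,
): void {
  if (!result.isApproved) {
    // Hold for teacher re-check or override instead of pushing a provisional score.
    console.log(`Evaluation ${result.evaluationId} is awaiting review`);
    return;
  }
  publish(result); // only confirmed results flow downstream
}
```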
What curriculum and data-governance alignments are supported out of the box?
Workflows are aligned with CBSE, ICSE, and GDPR requirements, with options for curriculum mappings, role-based access, data minimization, and audit trails. These controls help standardize rubrics and privacy practices across regions while keeping teachers in control of the final outputs.
Do the docs cover standardized pipelines like IELTS/TOEFL writing?
Yes. IELTS and TOEFL writing evaluations are available via the APIs and the white-label UI. Outputs include scores, rubric breakdowns, and actionable suggestions, with a human-in-the-loop safeguard for instructor calibration.
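A request sketch for an IELTS writing task, assuming a JSON endpoint and band-score fields that the docs do not spell out:

```ts
// Sketch: requesting an IELTS writing evaluation for a typed essay.
// The path, taskType values, and response fields are hypothetical.
async function evaluateIeltsEssay(essay: string): Promise<void> {
  const res = await fetch("https://api.crazygoldfish.example/v1/ielts/writing", { // assumed
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.CGF_API_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ taskType: "task2", essay }),
  });
  if (!res.ok) throw new Error(`IELTS evaluation failed: ${res.status}`);
  const result = await res.json();
  // Assumed shape: overall band, per-criterion breakdown, suggestions.
  console.log(result.overallBand, result.criteria, result.suggestions);
}
```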
What should we expect in the feedback payloads and how should we store them?
Expect structured JSON containing overall and sectional scores, strengths and areas to improve, model answers, and step-wise marking, plus teacher review flags such as isApproved. Many teams persist the Final Results and link annotated copies, then export to PDFs/CSVs or sync to their LMS/ERP.
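A persistence sketch under those assumptions: keep the full JSON as the system of record, then flatten sectional scores into CSV rows for LMS/ERP ingestion. File paths and all field names except isApproved are illustrative.

```ts
// Sketch: persist the feedback JSON, then flatten sections to CSV for export.
// Storage paths and every field name except isApproved are assumptions.
import { writeFile } from "node:fs/promises";

interface StoredResult {
  examId: string;
  overallScore: number;
  sections: Array<{ name: string; score: number }>;
  isApproved: boolean;
}

async function persistAndExport(result: StoredResult): Promise<void> {
  // The full JSON payload is the system of record.
  await writeFile(`results/${result.examId}.json`, JSON.stringify(result, null, 2));

  // One CSV row per section for LMS/ERP sync.
  const rows = result.sections.map(
    (s) => `${result.examId},${s.name},${s.score},${result.isApproved}`,
  );
  await writeFile(
    `exports/${result.examId}.csv`,
    ["examId,section,score,approved", ...rows].join("\n"),
  );
}
```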
Where do SDKs and webhooks fit in the integration pattern described in the docs?
Use REST APIs for submissions and retrieval, SDKs to accelerate wiring, and webhooks to notify your app when processing completes. This pattern decouples ingestion from result fetching and scales cleanly across multimodal workloads and review queues.
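To make the decoupling concrete, here is a bare webhook receiver sketch using Node's built-in http module. The route, the event name, and the payload fields are assumptions for illustration.

```ts
// Sketch: a webhook receiver that triggers a Final Results fetch on completion.
// The route, "evaluation.completed" event name, and payload fields are assumed.
import { createServer } from "node:http";

createServer((req, res) => {
  if (req.method !== "POST" || req.url !== "/webhooks/crazygoldfish") {
    res.writeHead(404).end();
    return;
  }
  let body = "";
  req.on("data", (chunk) => (body += chunk));
  req.on("end", () => {
    const event = JSON.parse(body);
    if (event.type === "evaluation.completed") {
      // Enqueue a Final Results fetch for event.evaluationId here,
      // instead of polling after every submission.
      console.log(`Evaluation ready: ${event.evaluationId}`);
    }
    res.writeHead(200).end("ok");
  });
}).listen(3000);
```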