Welcome to Sugar-AI

Sugar-AI is a specialized coding assistant designed to help children learn programming with the Sugar Learning Platform. It uses advanced AI to provide child-friendly explanations and code examples.

How Sugar-AI Can Help You

For Students & Teachers

Sugar-AI helps students and teachers learn coding with Sugar Labs tools:

  • Ask questions about Pygame, GTK+3, and Sugar Toolkit
  • Get explanations in child-friendly language
  • Learn programming concepts step by step
  • Receive examples tailored to Sugar activities
  • Get kid-friendly debugging suggestions for your Python programs

For Developers

Integrate Sugar-AI with your applications using our simple API:

  • RESTful API endpoints for easy integration
  • Secure authentication with API keys
  • RAG (Retrieval-Augmented Generation) for accurate answers
  • Documented endpoints for developers

For Administrators

Manage Sugar-AI and monitor usage through the admin panel:

  • Approve or deny API key requests
  • Monitor system quotas
  • Manage user permissions
  • Control access to advanced features

Your Sugar-AI Dashboard

Dashboard Overview

The Sugar-AI Dashboard is your personal control center, accessible after signing in with OAuth or an API key. Here's what you can do:

Interactive Chat

Ask questions directly to the AI and receive instant responses about Sugar development

RAG Toggle

Switch between knowledge-enhanced responses and direct language model responses

Usage Metrics

Track your API usage and remaining quota


Using the Sugar-AI API

Integrate Sugar-AI's capabilities into your own applications with our straightforward API:

Python Example

import requests

# Make a request to the Sugar-AI API
api_key = "YOUR_API_KEY"
url = "https://sugar-ai.sugarlabs.org/ask"
params = {"question": "How do I create a Pygame window?"}
headers = {"X-API-Key": api_key}

response = requests.post(url, params=params, headers=headers)
result = response.json()

print(result["answer"])
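In a real activity you will usually want a request timeout, error handling, and a check on the quota field before issuing more requests. A minimal sketch of that pattern — `ask_sugar_ai` and `remaining_quota` are illustrative helper names, not part of the API:

```python
import requests

def ask_sugar_ai(question, api_key, base_url="https://sugar-ai.sugarlabs.org"):
    """Send a question to the /ask endpoint and return the parsed JSON response."""
    response = requests.post(
        f"{base_url}/ask",
        params={"question": question},
        headers={"X-API-Key": api_key},
        timeout=30,
    )
    response.raise_for_status()  # surface bad keys or HTTP errors as exceptions
    return response.json()

def remaining_quota(result):
    """Read the remaining request count from a response, defaulting to 0."""
    return result.get("quota", {}).get("remaining", 0)
```

Checking `remaining_quota` after each call lets an activity warn the user before the quota runs out instead of failing on the next request.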

API Endpoints Overview

/ask — RAG-enabled answers (query parameter)
  • Retrieval-Augmented Generation
  • Sugar/Pygame/GTK documentation
  • Child-friendly responses

/ask-llm — Direct LLM without RAG (query parameter)
  • No document retrieval
  • Direct model access
  • Faster responses

/ask-llm-prompted — Custom prompt with advanced controls (JSON body)
  • Custom system prompts
  • Configurable generation parameters
  • Maximum flexibility

/debug — Python code debugging (query parameters)
  • Kid-friendly debugging suggestions
  • Context mode for explanations
  • Educational focus


API Endpoints Documentation

Ask Question (RAG Enhanced)

POST
/ask?question={your_question}

Ask a coding question using Retrieval-Augmented Generation for enhanced context and precision.

Headers:

X-API-Key: your_api_key

Example Request:

curl -X POST "http://localhost:8000/ask?question=How%20do%20I%20create%20a%20Python%20class?" \
    -H "X-API-Key: your_api_key"

Response Format:

{
    "answer": "Detailed explanation about Python classes...",
    "user": "Your Username",
    "quota": {"remaining": 95, "total": 100}
}

Debug Python Code

POST
/debug?code={your_python_code}&context={true|false}

Send your Python code to get debugging suggestions; context mode adds kid-friendly code explanations.

Headers:

X-API-Key: your_api_key

Params:

code: your_python_code
context: true | false  # enable context mode (explains the code in a kid-friendly way)

Example Request:

curl -X POST "http://localhost:8000/debug?code=print%28undeclared%29&context=true" \
    -H "X-API-Key: your_api_key"

Response Format:

{
    "answer": "Debugging suggestions for your Python code...",
    "user": "Your Username",
    "quota": {"remaining": 95, "total": 100}
}
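The same call can be made from Python. A sketch under the documented query format — `build_debug_params` and `debug_code` are illustrative names, and the context flag is sent as a lowercase string to match the documented true|false form:

```python
import requests

def build_debug_params(code, context=True):
    """Build /debug query parameters; context is sent as "true"/"false"."""
    return {"code": code, "context": "true" if context else "false"}

def debug_code(code, api_key, context=True, base_url="http://localhost:8000"):
    """Send Python source to /debug and return the debugging suggestions."""
    response = requests.post(
        f"{base_url}/debug",
        params=build_debug_params(code, context),
        headers={"X-API-Key": api_key},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["answer"]
```

Passing the source through `params` lets requests handle the URL encoding that the curl example does by hand.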

Direct LLM Question

POST
/ask-llm?question={your_question}

Ask a question directly to the LLM without RAG enhancement.

Headers:

X-API-Key: your_api_key

Example Request:

curl -X POST "http://localhost:8000/ask-llm?question=What%20is%20the%20difference%20between%20lists%20and%20tuples?" \
    -H "X-API-Key: your_api_key"
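Since /ask and /ask-llm accept the same input, a client can mirror the dashboard's RAG toggle with one helper. A sketch — `select_endpoint` and `ask` are illustrative names:

```python
import requests

def select_endpoint(use_rag):
    """Map the RAG toggle to the matching endpoint path."""
    return "/ask" if use_rag else "/ask-llm"

def ask(question, api_key, use_rag=True, base_url="http://localhost:8000"):
    """Ask a question with (use_rag=True) or without (use_rag=False) retrieval."""
    response = requests.post(
        base_url + select_endpoint(use_rag),
        params={"question": question},
        headers={"X-API-Key": api_key},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["answer"]
```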

Custom Prompt with Advanced Controls

POST
/ask-llm-prompted

A powerful endpoint that allows custom system prompts and configurable generation parameters for maximum flexibility.

Headers:

X-API-Key: your_api_key
Content-Type: application/json

Features:

  • Custom system prompts for specific use cases
  • Configurable generation parameters
  • Direct LLM access without RAG
  • Suitable for specialized activities and applications

Basic Usage:

curl -X POST "http://localhost:8000/ask-llm-prompted" \
  -H "X-API-Key: your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "question": "How do I create a Pygame window?",
    "custom_prompt": "You are a Python expert. Provide detailed code examples with explanations."
  }'

Advanced Usage with Generation Parameters:

curl -X POST "http://localhost:8000/ask-llm-prompted" \
  -H "X-API-Key: your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "question": "Write a function to calculate fibonacci numbers",
    "custom_prompt": "You are a coding tutor. Explain step-by-step with comments.",
    "max_length": 1024,
    "truncation": true,
    "repetition_penalty": 1.1,
    "temperature": 0.7,
    "top_p": 0.9,
    "top_k": 50
  }'

Request Parameters:

  • question (required): The question or task to process
  • custom_prompt (required): Your custom system prompt
  • max_length (optional, default: 1024): Maximum length of generated response
  • truncation (optional, default: true): Whether to truncate long inputs
  • repetition_penalty (optional, default: 1.1): Controls repetition (1.0 = no penalty, >1.0 = less repetition)
  • temperature (optional, default: 0.7): Controls randomness (0.0 = deterministic, 1.0 = very random)
  • top_p (optional, default: 0.9): Nucleus sampling (0.1 = focused, 0.9 = diverse)
  • top_k (optional, default: 50): Limits sampling to the K most likely tokens
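When calling /ask-llm-prompted from Python, it is convenient to start from the documented defaults and apply only the overrides you need. A sketch — `build_prompted_payload` and `ask_prompted` are illustrative helpers:

```python
import requests

# Documented defaults for the /ask-llm-prompted generation parameters
PROMPTED_DEFAULTS = {
    "max_length": 1024,
    "truncation": True,
    "repetition_penalty": 1.1,
    "temperature": 0.7,
    "top_p": 0.9,
    "top_k": 50,
}

def build_prompted_payload(question, custom_prompt, **overrides):
    """Merge caller overrides over the documented defaults."""
    payload = {"question": question, "custom_prompt": custom_prompt,
               **PROMPTED_DEFAULTS}
    payload.update(overrides)
    return payload

def ask_prompted(payload, api_key, base_url="http://localhost:8000"):
    """POST a JSON body to /ask-llm-prompted and return the parsed response."""
    response = requests.post(
        f"{base_url}/ask-llm-prompted",
        json=payload,
        headers={"X-API-Key": api_key},
        timeout=60,
    )
    response.raise_for_status()
    return response.json()
```

For example, `build_prompted_payload("Explain loops", "You are a coding tutor.", temperature=0.3)` keeps every default except the temperature.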

Response Format:

{
  "answer": "Here's how to create a Pygame window:...",
  "user": "Your Username",
  "quota": {"remaining": 95, "total": 100},
  "generation_params": {
    "max_length": 1024,
    "truncation": true,
    "repetition_penalty": 1.1,
    "temperature": 0.7,
    "top_p": 0.9,
    "top_k": 50
  }
}

Generation Parameter Guidelines:

  • For Code: temperature: 0.2-0.4, top_p: 0.8, repetition_penalty: 1.1
  • For Creative Content: temperature: 0.7-0.9, top_p: 0.9, repetition_penalty: 1.2
  • For Factual Answers: temperature: 0.3-0.5, top_p: 0.7, repetition_penalty: 1.0
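These guidelines can be captured as presets that an activity passes straight into the request body. A sketch using midpoint values from the ranges above — the preset names and `prompted_body` helper are illustrative:

```python
# Presets derived from the guidelines above (midpoints of the suggested ranges)
GENERATION_PRESETS = {
    "code":     {"temperature": 0.3, "top_p": 0.8, "repetition_penalty": 1.1},
    "creative": {"temperature": 0.8, "top_p": 0.9, "repetition_penalty": 1.2},
    "factual":  {"temperature": 0.4, "top_p": 0.7, "repetition_penalty": 1.0},
}

def prompted_body(question, custom_prompt, preset="code"):
    """Build an /ask-llm-prompted JSON body from a named preset."""
    return {"question": question, "custom_prompt": custom_prompt,
            **GENERATION_PRESETS[preset]}
```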

Use Cases:

Different activities can use different system prompts and generation parameters to tailor the model to their specific needs. Some examples:

  • The Speak-AI activity can use this endpoint with custom prompts to give its chatbot different personas
  • The Pippy activity can use this endpoint with custom prompts to build a code debugger

Change Model (Admin Only)

POST
/change-model?model={model_name}&api_key={admin_key}&password={admin_password}

Change the LLM model being used for responses. Requires admin API key and password.

Example Request:

curl -X POST "http://localhost:8000/change-model?model=Qwen/Qwen2-1.5B-Instruct&api_key=admin_key&password=admin_password"