Quick Start
This guide helps you get started with QALITA Studio in minutes.
Prerequisites
- QALITA Platform deployed and accessible
- Administrator access to configure LLM settings
- Ollama installed (optional, for local models)
Step 1: Access QALITA Platform
QALITA Studio is integrated into QALITA Platform. Access your Platform instance:
- Open your browser and navigate to your Platform URL
- Log in with your credentials
- In the sidebar, click on Studio
Step 2: Configure LLM Provider
Before using Studio, an administrator must configure an LLM provider.
Navigate to AI Settings
- Go to Settings in the Platform
- Select AI Configuration
- Click Add Configuration
Configure a Provider
Option A: Ollama (Local - Recommended to start)
First, install Ollama on your Worker server or a reachable host:
# macOS / Linux
curl -fsSL https://ollama.ai/install.sh | sh
# Start Ollama
ollama serve
# Download a model
ollama pull llama3.2
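Before configuring the Platform, confirm Ollama is reachable from the host the Worker runs on (replace localhost with your Ollama host if it is remote):
# Should list the models you pulled, including llama3.2
curl http://localhost:11434/api/tags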
Then configure in Platform:
- Provider: Ollama
- Endpoint URL: http://localhost:11434 (or your Ollama host)
- Model: llama3.2
Option B: OpenAI
- Provider: OpenAI
- API Key: Your OpenAI API key (sk-...)
- Model: gpt-4o, gpt-4o-mini, or gpt-3.5-turbo
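If you want to sanity-check the key before saving, it can be tested directly against the OpenAI models endpoint (substitute your real key):
# A 200 response with a model list confirms the key is valid
curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer sk-..."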
Option C: Azure OpenAI
- Provider: Azure OpenAI
- Endpoint URL: Your Azure endpoint
- API Key: Your Azure API key
- Model: Your deployed model name
Option D: Anthropic (Claude)
- Provider: Anthropic
- API Key: From console.anthropic.com
- Model: claude-3-5-sonnet-20241022 or claude-3-opus-20240229
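Anthropic keys use a different header scheme than OpenAI; a minimal request to the Messages API verifies the key (model name from the list above, YOUR_KEY substituted):
curl https://api.anthropic.com/v1/messages \
  -H "x-api-key: YOUR_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{"model": "claude-3-5-sonnet-20241022", "max_tokens": 8, "messages": [{"role": "user", "content": "ping"}]}'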
Option E: Mistral AI
- Provider: Mistral
- API Key: From console.mistral.ai
- Model: mistral-large-latest or mistral-small-latest
Activate Configuration
- Click Test Connection to verify
- If successful, click Save
- Enable the configuration as Active
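Once the configuration is saved and marked Active, you can confirm the backend picked it up with the capabilities endpoint (the same call is described under Verification below):
# "llm_configured": true confirms the active configuration is in use
curl -H "Authorization: Bearer YOUR_TOKEN" \
  https://your-platform/api/v1/studio/agent/capabilities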
Step 3: Ensure Worker Connection
Studio uses Workers to access data sources. Verify a Worker is connected:
- Go to Workers in Platform
- Check that at least one Worker shows as Connected
- If no Worker is connected, start one with the CLI:
qalita worker start
Step 4: First Conversation
- Navigate to Studio in the sidebar
- Select a Source from the context panel (optional but recommended)
- Type your first message, for example:
Hello! Can you describe this data source?
- Press Enter or click Send
- Watch the response stream in real time
Example Prompts
General questions:
- "Explain what data quality means"
- "What are the main types of data anomalies?"
- "Generate a SQL script to detect duplicates"
With source context:
- "Describe the schema of this source"
- "Show me a sample of 10 records"
- "Which columns have null values?"
With issue context:
- "Analyze this quality issue and suggest solutions"
- "What patterns do you see in the affected data?"
- "Propose a fix for this anomaly"
Step 5: Using Data Tools
Studio can execute operations on your data sources through the Worker:
Describe a Source
Describe the schema and structure of this source
The agent will call the describe_source tool and return column names, types, and row counts.
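The exact payload depends on the source type; as a rough sketch (field names here are illustrative, not the tool's actual schema), the summary looks something like:
{
  "columns": [
    {"name": "customer_id", "type": "integer"},
    {"name": "email", "type": "varchar"}
  ],
  "row_count": 125000
}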
Query Data
Execute: SELECT * FROM customers WHERE email IS NULL LIMIT 10
The agent can run SQL queries on database sources.
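The same mechanism works for quality checks; for example, a duplicate-detection prompt (table and column names are illustrative):
Execute: SELECT email, COUNT(*) AS n FROM customers GROUP BY email HAVING COUNT(*) > 1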
Sample Data
Show me a random sample of 20 records
The agent will fetch a random sample from the source.
Verification
Check Agent Capabilities
Access the API endpoint to verify Studio is properly configured:
curl -H "Authorization: Bearer YOUR_TOKEN" \
https://your-platform/api/v1/studio/agent/capabilities
Should return:
{
"agent_available": true,
"llm_configured": true,
"active_config": {
"id": 1,
"name": "Production LLM",
"provider": "openai",
"model_name": "gpt-4o-mini"
}
}
Check Worker Status
curl -H "Authorization: Bearer YOUR_TOKEN" \
https://your-platform/api/v1/studio/status
Should return connected workers:
{
"connected_workers": [1],
"worker_count": 1
}
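If you run these checks often, both can be wrapped in a small shell script (a sketch; assumes jq is installed and YOUR_TOKEN / your-platform are substituted):
#!/usr/bin/env bash
TOKEN="YOUR_TOKEN"
BASE="https://your-platform/api/v1/studio"
# Agent and LLM readiness
curl -s -H "Authorization: Bearer $TOKEN" "$BASE/agent/capabilities" \
  | jq '{agent_available, llm_configured}'
# Connected workers
curl -s -H "Authorization: Bearer $TOKEN" "$BASE/status" \
  | jq '{worker_count, connected_workers}'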
Troubleshooting
"Agent module not available"
The LangChain/LangGraph dependencies are not installed on the backend. Contact your administrator to install:
pip install langchain langgraph langchain-openai langchain-anthropic langchain-ollama
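After installation, a quick import check confirms the dependencies are visible to the backend's Python environment (package names from the command above):
python -c "import langchain, langgraph; print('agent dependencies OK')"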
"No LLM configuration found"
No active LLM configuration exists for your organization. An administrator needs to configure one in Settings > AI Configuration.
"No worker available"
No Worker is connected. Start a Worker with:
qalita worker start
Ollama connection issues
# Verify Ollama is running
curl http://localhost:11434/api/tags
# Check if model is installed
ollama list
# Pull missing model
ollama pull llama3.2
Streaming not working
If responses appear all at once instead of streaming, check:
- The LLM provider supports streaming
- No proxy is buffering responses (see the nginx sketch below)
- Try the non-streaming endpoint as a fallback
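If the Platform sits behind nginx, response buffering is a common culprit; a minimal sketch that disables it for Studio routes (assumes nginx and that Studio endpoints live under /api/v1/studio/ as shown above):
location /api/v1/studio/ {
    proxy_pass http://backend;   # your Platform backend upstream (assumption)
    proxy_buffering off;         # let streamed tokens through immediately
    proxy_cache off;
    proxy_http_version 1.1;
}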
Next Steps
- 📖 Configuration - Configure multiple providers and manage settings
- 🚀 Features - Discover all Studio capabilities
- 💬 Conversation Management - Organize your conversations
- 🔧 Platform Integration - Deep dive into Platform integration
Support
Need help? Check our complete documentation or contact support.