# Configuration
QALITA Studio offers great flexibility in configuring AI providers and their parameters.
## Configuration Architecture
Studio's LLM configuration is managed through QALITA Platform's Settings > AI Configuration interface. Configurations are stored in the Platform database and linked to your organization (partner).
### Configuration Model
Each LLM configuration includes:
| Field | Description |
|---|---|
| `name` | Display name for the configuration |
| `provider` | Provider type (`openai`, `anthropic`, `ollama`, etc.) |
| `model_name` | Specific model to use |
| `api_key` | API key for authentication (stored encrypted) |
| `endpoint_url` | Custom endpoint URL (for Ollama, Azure, generic) |
| `is_active` | Whether this is the active configuration |
| `configuration` | Additional parameters (temperature, timeout, etc.) |
### Multiple Configurations
You can create multiple LLM configurations for different use cases:
- Development: Use Ollama for free local testing
- Production: Use GPT-4o for high-quality responses
- Cost-optimized: Use GPT-4o-mini for routine tasks
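For illustration, two such configurations might look like this (field names come from the configuration model above; the exact stored JSON shape is an assumption):

```json
[
  {
    "name": "Dev (local Ollama)",
    "provider": "ollama",
    "model_name": "qwen2.5:7b",
    "api_key": null,
    "endpoint_url": "http://localhost:11434",
    "is_active": false,
    "configuration": {"temperature": 0.0}
  },
  {
    "name": "Production OpenAI",
    "provider": "openai",
    "model_name": "gpt-4o",
    "api_key": "sk-proj-...",
    "endpoint_url": null,
    "is_active": true,
    "configuration": {"temperature": 0.0, "timeout_seconds": 60.0}
  }
]
```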
## Available Providers
### 1. Ollama (Local)
This provider uses Ollama to run open-source models on your own machine.
Advantages:
- ✅ Free and unlimited
- ✅ Total privacy (local data)
- ✅ No network dependency
- ✅ Many models available
Platform Configuration:
| Field | Value |
|---|---|
| Provider | Ollama |
| Endpoint URL | http://localhost:11434 (or your host) |
| Model | llama3.2, qwen2.5:7b, etc. |
| API Key | Not required |
Recommended Models:
| Model | Size | Usage | Performance |
|---|---|---|---|
| `qwen2.5:7b` | 7B | General, multilingual | ⭐⭐⭐⭐⭐ |
| `llama3.2` | 3B | Fast, lightweight | ⭐⭐⭐⭐ |
| `mistral:7b` | 7B | Reasoning | ⭐⭐⭐⭐⭐ |
| `phi3:medium` | 14B | Precise | ⭐⭐⭐⭐ |
| `deepseek-coder:6.7b` | 6.7B | SQL/Python code | ⭐⭐⭐⭐⭐ |
Installing a model:
```bash
# Download a model
ollama pull qwen2.5:7b

# List installed models
ollama list

# Test a model
ollama run qwen2.5:7b "Hello!"
```
Verification:
Test endpoint: `http://127.0.0.1:11434/api/tags`
Ensure the Platform backend can reach the Ollama server. If Ollama runs on a different host, use its IP address or hostname.
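A reachable Ollama server answers `GET /api/tags` with the list of installed models. The response looks roughly like this (abridged; exact fields vary by Ollama version):

```json
{
  "models": [
    {"name": "qwen2.5:7b"},
    {"name": "llama3.2:latest"}
  ]
}
```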
### 2. OpenAI (ChatGPT)
Access GPT-4, GPT-4o, and GPT-3.5 models.
Platform Configuration:
| Field | Value |
|---|---|
| Provider | OpenAI |
| API Key | sk-proj-... |
| Model | gpt-4o, gpt-4o-mini, gpt-3.5-turbo |
| Endpoint URL | Not required (uses default) |
Available Models:
| Model | Context | Cost | Usage |
|---|---|---|---|
| `gpt-4o` | 128K | $$ | Best quality/price ratio |
| `gpt-4o-mini` | 128K | $ | Fast and economical |
| `gpt-3.5-turbo` | 16K | $ | Basic, fast |
Getting an API key:
- Create an account on platform.openai.com
- Go to API Keys
- Create a new secret key
- Copy it (it won't be visible again)
Pricing: Check openai.com/pricing
### 3. Azure OpenAI
Enterprise-grade OpenAI models on Azure infrastructure.
Platform Configuration:
| Field | Value |
|---|---|
| Provider | Azure OpenAI |
| Endpoint URL | https://your-resource.openai.azure.com/ |
| API Key | Your Azure OpenAI API key |
| Model | Your deployed model name |
Advantages:
- Enterprise compliance and security
- Regional data residency
- SLA guarantees
- Integration with Azure services
Getting started:
- Create an Azure OpenAI resource in Azure Portal
- Deploy a model (gpt-4o, gpt-4, etc.)
- Get the endpoint URL and API key from the resource
### 4. Mistral AI
High-quality models from the French provider Mistral AI.
Platform Configuration:
| Field | Value |
|---|---|
| Provider | Mistral |
| API Key | From console.mistral.ai |
| Model | mistral-large-latest, mistral-small-latest |
Available Models:
| Model | Context | Performance |
|---|---|---|
| `mistral-large-latest` | 128K | Excellent |
| `mistral-small-latest` | 32K | Good |
| `open-mistral-7b` | 32K | Decent |
Getting an API key:
- Sign up on console.mistral.ai
- Create an API Key
- Top up your credits if necessary
Advantages:
- Excellent for French language
- Good context understanding
- Competitive pricing
### 5. Claude (Anthropic)
Claude 3 and Claude 3.5 models for advanced reasoning.
Platform Configuration:
| Field | Value |
|---|---|
| Provider | Anthropic |
| API Key | sk-ant-... from console.anthropic.com |
| Model | claude-3-5-sonnet-20241022, claude-3-opus-20240229 |
Available Models:
| Model | Context | Capabilities |
|---|---|---|
| `claude-3-5-sonnet-20241022` | 200K | Versatile excellence |
| `claude-3-opus-20240229` | 200K | Complex reasoning |
| `claude-3-haiku-20240307` | 200K | Fast and lightweight |
Getting an API key:
- Create an account on console.anthropic.com
- Go to API Keys
- Generate a new key
Features:
- Excellent for complex analysis
- Very good in multiple languages
- Context up to 200K tokens
### 6. Generic Provider (OpenAI-compatible) 🔧
Connect to any OpenAI-compatible API endpoint (vLLM, LM Studio, etc.).
Platform Configuration:
| Field | Value |
|---|---|
| Provider | Generic |
| Endpoint URL | Your server URL (e.g., http://localhost:8000/v1) |
| API Key | Server API key (if required) |
| Model | Model name as expected by the server |
Use cases:
- Self-hosted LLM servers (vLLM, text-generation-inference)
- LM Studio local deployment
- Custom fine-tuned models
- Air-gapped environments
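To sanity-check that a server really speaks the OpenAI chat API, it helps to see the request shape it must accept. The sketch below only assembles the URL, headers, and body without sending anything; the endpoint URL and model name are placeholders, not values from Platform:

```python
import json

def build_chat_request(endpoint_url, model, prompt, api_key=None):
    """Assemble an OpenAI-compatible /chat/completions request (no network call)."""
    url = endpoint_url.rstrip("/") + "/chat/completions"
    headers = {"Content-Type": "application/json"}
    if api_key:  # many self-hosted servers run without authentication
        headers["Authorization"] = f"Bearer {api_key}"
    body = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return url, headers, json.dumps(body)

url, headers, body = build_chat_request("http://localhost:8000/v1", "my-model", "Hello!")
print(url)  # http://localhost:8000/v1/chat/completions
```

Any server that accepts this payload at that path should work with the Generic provider.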
## Configuration via Platform Interface
### Add a Configuration
- Go to Settings in Platform
- Navigate to AI Configuration
- Click Add Configuration
- Fill in the required fields:
- Name: Display name for this configuration
- Provider: Select from dropdown
- API Key: For cloud providers
- Model: Exact model name
- Endpoint URL: For Ollama, Azure, or Generic
- Click Test Connection to verify
- If ✅ success, click Save
### Activate a Configuration
- In AI Configuration list
- Click on the desired configuration
- Toggle Active to enable
- Only one configuration can be active at a time
### Edit or Delete
- Click on a configuration to edit
- Use the Delete button to remove
## Configuration via API
LLM configurations are managed through the Platform's standard REST API.
### Get Agent Capabilities
`GET /api/v1/studio/agent/capabilities`
Returns the current agent status and active configuration:
```json
{
  "agent_available": true,
  "llm_configured": true,
  "active_config": {
    "id": 1,
    "name": "Production OpenAI",
    "provider": "openai",
    "model_name": "gpt-4o-mini",
    "endpoint_url": null
  }
}
```
### Get Studio Status
`GET /api/v1/studio/status`
Returns worker connectivity status:
```json
{
  "connected_workers": [1, 3],
  "worker_count": 2
}
```
## Backend Dependencies
For the agent module to be available, install the required Python packages on the Platform backend:
```bash
pip install langchain langgraph langchain-openai langchain-anthropic langchain-ollama
```
These are optional dependencies. If they are not installed, the `agent_available` field will be `false`.
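As a sketch, a client could combine the two fields returned by the capabilities endpoint to decide whether Studio's assistant is usable (field names taken from the response above; the helper function is illustrative, not part of Platform):

```python
def studio_ready(capabilities: dict) -> bool:
    """True when the agent module is installed and an LLM configuration is active."""
    return bool(capabilities.get("agent_available")) and bool(capabilities.get("llm_configured"))

print(studio_ready({"agent_available": True, "llm_configured": True}))   # True
print(studio_ready({"agent_available": False, "llm_configured": True}))  # False
```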
## Advanced Configuration
### Custom Models (Ollama)
You can create your own models with Ollama:
```bash
# Create a Modelfile
cat > Modelfile <<EOF
FROM qwen2.5:7b
SYSTEM "You are a data quality expert. Always answer professionally."
PARAMETER temperature 0.7
PARAMETER top_p 0.9
EOF

# Create the model
ollama create qalita-expert -f Modelfile

# Then set the model name to "qalita-expert" in the Platform configuration
```
### Additional Configuration Parameters
The `configuration` field in Platform supports extra parameters:
```json
{
  "temperature": 0.0,
  "timeout_seconds": 60.0,
  "max_retries": 2
}
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| `temperature` | float | 0.0 | Creativity (0.0 = deterministic, 1.0 = creative) |
| `timeout_seconds` | float | 60.0 | Request timeout in seconds |
| `max_retries` | int | 2 | Retry attempts on failure |
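How a backend might honor `timeout_seconds` and `max_retries` is sketched below. This illustrates the semantics of the parameters, not Platform's actual implementation; `send_fn` stands in for any function that performs the LLM request:

```python
import time

def call_with_retries(send_fn, max_retries=2, timeout_seconds=60.0, backoff=1.0):
    """Call send_fn, retrying up to max_retries extra times and passing the timeout through."""
    last_exc = None
    for attempt in range(max_retries + 1):
        try:
            return send_fn(timeout=timeout_seconds)
        except Exception as exc:  # real code would catch narrower, provider-specific errors
            last_exc = exc
            if attempt < max_retries:
                time.sleep(backoff * (2 ** attempt))  # simple exponential backoff
    raise last_exc
```

With `max_retries: 2`, a failing request is therefore attempted three times in total before the error is surfaced.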
## Security
### API Key Protection
- API keys are stored encrypted in the Platform database
- Keys are never exposed in API responses
- Access is controlled by Platform authentication
### Data Privacy
Studio conversations may include:
- Data samples from sources
- Quality metrics and recommendations
- Schema information
Ensure your LLM provider's data handling policies align with your organization's requirements. For maximum privacy, use Ollama with local models.
## Troubleshooting
### "Agent module not available"
Install the required dependencies on the Platform backend:
```bash
pip install langchain langgraph langchain-openai langchain-anthropic langchain-ollama
```
### "No LLM configuration found"
- Go to Settings > AI Configuration
- Create a new configuration
- Set it as Active
### "Model not found in Ollama"
```bash
# Check that the model is installed
ollama list

# Install it if necessary
ollama pull <model-name>
```
### Authentication error (401/403)
- Verify that your API key is valid
- Test directly with the provider's API
- Check that you have available credits
### Connection timeout
Check network connectivity from the Platform backend to the LLM provider:
```bash
# For Ollama
curl http://localhost:11434/api/tags

# For OpenAI
curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer YOUR_KEY"
```
## Best Practices
- Start with Local: Test with Ollama first before using paid APIs
- Adapted Models: Choose the model according to your use case
- Cost Monitoring: Use `gpt-4o-mini` for development and testing
- Security: Rotate API keys periodically
- Multiple Configs: Create configurations for different environments
## Next Steps
- 🚀 Features - Explore all Studio capabilities
- 💬 Conversation Management - Organize your conversations
- 🔧 Platform Integration - Deep dive into Platform integration