
Configuration

QALITA Studio offers great flexibility in configuring AI providers and their parameters.

Configuration Architecture

Studio's LLM configuration is managed through QALITA Platform's Settings > AI Configuration interface. Configurations are stored in the Platform database and linked to your organization (partner).

Configuration Model

Each LLM configuration includes:

| Field | Description |
|---|---|
| name | Display name for the configuration |
| provider | Provider type (openai, anthropic, ollama, etc.) |
| model_name | Specific model to use |
| api_key | API key for authentication (encrypted) |
| endpoint_url | Custom endpoint URL (for Ollama, Azure, generic) |
| is_active | Whether this is the active configuration |
| configuration | Additional parameters (temperature, timeout, etc.) |
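
For illustration, a full configuration record might look like this (the values are hypothetical, shown here in JSON form):

{
  "name": "Production OpenAI",
  "provider": "openai",
  "model_name": "gpt-4o-mini",
  "api_key": "sk-proj-...",
  "endpoint_url": null,
  "is_active": true,
  "configuration": {
    "temperature": 0.0,
    "timeout_seconds": 60.0
  }
}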

Multiple Configurations

You can create multiple LLM configurations for different use cases:

  • Development: Use Ollama for free local testing
  • Production: Use GPT-4o for high-quality responses
  • Cost-optimized: Use GPT-4o-mini for routine tasks

Available Providers

1. Ollama (Local)

This provider uses Ollama to run open-source models on your own infrastructure.

Advantages:

  • ✅ Free and unlimited
  • ✅ Total privacy (local data)
  • ✅ No network dependency
  • ✅ Many models available

Platform Configuration:

| Field | Value |
|---|---|
| Provider | Ollama |
| Endpoint URL | http://localhost:11434 (or your host) |
| Model | llama3.2, qwen2.5:7b, etc. |
| API Key | Not required |

Recommended Models:

| Model | Size | Usage | Performance |
|---|---|---|---|
| qwen2.5:7b | 7B | General, multilingual | ⭐⭐⭐⭐⭐ |
| llama3.2 | 3B | Fast, lightweight | ⭐⭐⭐⭐ |
| mistral:7b | 7B | Reasoning | ⭐⭐⭐⭐⭐ |
| phi3:medium | 14B | Precise | ⭐⭐⭐⭐ |
| deepseek-coder:6.7b | 6.7B | SQL/Python code | ⭐⭐⭐⭐⭐ |

Installing a model:

# Download a model
ollama pull qwen2.5:7b

# List installed models
ollama list

# Test a model
ollama run qwen2.5:7b "Hello!"

Verification:

Test endpoint: http://127.0.0.1:11434/api/tags
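
For example, from the Platform backend host (the response should be a JSON object listing your installed models):

# Verify that Ollama is reachable
curl http://127.0.0.1:11434/api/tags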

Network Access

Ensure the Platform backend can reach the Ollama server. If Ollama runs on a different host, use its IP address or hostname.

2. OpenAI (ChatGPT)

Access GPT-4, GPT-4o, and GPT-3.5 models.

Platform Configuration:

| Field | Value |
|---|---|
| Provider | OpenAI |
| API Key | sk-proj-... |
| Model | gpt-4o, gpt-4o-mini, gpt-3.5-turbo |
| Endpoint URL | Not required (uses default) |

Available Models:

| Model | Context | Cost | Usage |
|---|---|---|---|
| gpt-4o | 128K | $$ | Best quality/price ratio |
| gpt-4o-mini | 128K | $ | Fast and economical |
| gpt-3.5-turbo | 16K | $ | Basic, fast |

Getting an API key:

  1. Create an account on platform.openai.com
  2. Go to API Keys
  3. Create a new secret key
  4. Copy it (it won't be visible again)
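
Before saving the key in Platform, you can sanity-check it with a minimal request (the model and prompt are just examples):

# A valid key returns a JSON completion; an invalid one returns a 401 error
curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer sk-proj-..." \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "Hello"}]}'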

Pricing: Check openai.com/pricing

3. Azure OpenAI

Enterprise-grade OpenAI models on Azure infrastructure.

Platform Configuration:

| Field | Value |
|---|---|
| Provider | Azure OpenAI |
| Endpoint URL | https://your-resource.openai.azure.com/ |
| API Key | Your Azure OpenAI API key |
| Model | Your deployed model name |

Advantages:

  • Enterprise compliance and security
  • Regional data residency
  • SLA guarantees
  • Integration with Azure services

Getting started:

  1. Create an Azure OpenAI resource in Azure Portal
  2. Deploy a model (gpt-4o, gpt-4, etc.)
  3. Get the endpoint URL and API key from the resource
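
As a quick check, you can call your deployment directly (the resource name, deployment name, and api-version below are placeholders; use your own values):

# Minimal chat completion against an Azure OpenAI deployment
curl "https://your-resource.openai.azure.com/openai/deployments/your-deployment/chat/completions?api-version=2024-02-01" \
  -H "api-key: YOUR_AZURE_OPENAI_KEY" \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello"}]}'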

4. Mistral AI

High-quality models from French provider Mistral AI.

Platform Configuration:

| Field | Value |
|---|---|
| Provider | Mistral |
| API Key | From console.mistral.ai |
| Model | mistral-large-latest, mistral-small-latest |

Available Models:

| Model | Context | Performance |
|---|---|---|
| mistral-large-latest | 128K | Excellent |
| mistral-small-latest | 32K | Good |
| open-mistral-7b | 32K | Decent |

Getting an API key:

  1. Sign up on console.mistral.ai
  2. Create an API Key
  3. Top up your credits if necessary
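
To verify the key before using it in Platform (replace the placeholder key):

# Should return the list of models available to your account
curl https://api.mistral.ai/v1/models \
  -H "Authorization: Bearer YOUR_MISTRAL_KEY"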

Advantages:

  • Excellent for French language
  • Good context understanding
  • Competitive pricing

5. Claude (Anthropic)

Claude 3 and 3.5 models for advanced reasoning.

Platform Configuration:

| Field | Value |
|---|---|
| Provider | Anthropic |
| API Key | sk-ant-... from console.anthropic.com |
| Model | claude-3-5-sonnet-20241022, claude-3-opus-20240229 |

Available Models:

| Model | Context | Capabilities |
|---|---|---|
| claude-3-5-sonnet-20241022 | 200K | Versatile excellence |
| claude-3-opus-20240229 | 200K | Complex reasoning |
| claude-3-haiku-20240307 | 200K | Fast and lightweight |

Getting an API key:

  1. Create an account on console.anthropic.com
  2. Go to API Keys
  3. Generate a new key
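
To confirm the key works (the model name and max_tokens below are just example values):

# Minimal message request against the Anthropic API
curl https://api.anthropic.com/v1/messages \
  -H "x-api-key: sk-ant-..." \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{"model": "claude-3-haiku-20240307", "max_tokens": 32, "messages": [{"role": "user", "content": "Hello"}]}'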

Features:

  • Excellent for complex analysis
  • Very good in multiple languages
  • Context up to 200K tokens

6. Generic Provider (OpenAI-compatible)

Connect to any OpenAI-compatible API endpoint (vLLM, LM Studio, etc.).

Platform Configuration:

| Field | Value |
|---|---|
| Provider | Generic |
| Endpoint URL | Your server URL (e.g., http://localhost:8000/v1) |
| API Key | Server API key (if required) |
| Model | Model name as expected by the server |

Use cases:

  • Self-hosted LLM servers (vLLM, text-generation-inference)
  • LM Studio local deployment
  • Custom fine-tuned models
  • Air-gapped environments
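
Before saving the configuration, you can confirm the endpoint speaks the OpenAI protocol (the URL and key below are examples):

# An OpenAI-compatible server should answer on /v1/models
curl http://localhost:8000/v1/models \
  -H "Authorization: Bearer YOUR_SERVER_KEY"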

Configuration via Platform Interface

Add a Configuration

  1. Go to Settings in Platform
  2. Navigate to AI Configuration
  3. Click Add Configuration
  4. Fill in the required fields:
    • Name: Display name for this configuration
    • Provider: Select from dropdown
    • API Key: For cloud providers
    • Model: Exact model name
    • Endpoint URL: For Ollama, Azure, or Generic
  5. Click Test Connection to verify
  6. If ✅ success, click Save

Activate a Configuration

  1. Open the AI Configuration list
  2. Click on the desired configuration
  3. Toggle Active to enable it

Only one configuration can be active at a time.

Edit or Delete

  • Click on a configuration to edit it
  • Use the Delete button to remove it

Configuration via API

LLM configurations are managed through the Platform's standard REST API.

Get Agent Capabilities

GET /api/v1/studio/agent/capabilities

Returns the current agent status and active configuration:

{
  "agent_available": true,
  "llm_configured": true,
  "active_config": {
    "id": 1,
    "name": "Production OpenAI",
    "provider": "openai",
    "model_name": "gpt-4o-mini",
    "endpoint_url": null
  }
}
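
For example, assuming Platform is reachable at platform.example.com and you authenticate with a bearer token (both are placeholders for your actual setup):

# Query the agent capabilities endpoint
curl https://platform.example.com/api/v1/studio/agent/capabilities \
  -H "Authorization: Bearer YOUR_PLATFORM_TOKEN"

The same pattern applies to the status endpoint below.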

Get Studio Status

GET /api/v1/studio/status

Returns worker connectivity status:

{
  "connected_workers": [1, 3],
  "worker_count": 2
}

Backend Dependencies

For the agent module to be available, install the required Python packages on the Platform backend:

pip install langchain langgraph langchain-openai langchain-anthropic langchain-ollama

These are optional dependencies. If not installed, the agent_available field will be false.
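
To confirm the optional dependencies are importable from the backend's Python environment:

# Exits silently on success, raises ImportError otherwise
python -c "import langchain, langgraph, langchain_openai, langchain_anthropic, langchain_ollama"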

Advanced Configuration

Custom Models (Ollama)

You can create your own models with Ollama:

# Create a Modelfile
cat > Modelfile <<EOF
FROM qwen2.5:7b
SYSTEM "You are a data quality expert. Always answer professionally."
PARAMETER temperature 0.7
PARAMETER top_p 0.9
EOF

# Create the model
ollama create qalita-expert -f Modelfile

# Use in Platform configuration
# model: "qalita-expert"
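
You can then test the custom model before pointing a Platform configuration at it (the prompt is just an example):

# The model should answer using the persona defined in the Modelfile
ollama run qalita-expert "What does a high null rate in a column suggest?"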

Additional Configuration Parameters

The configuration field in Platform supports extra parameters:

{
  "temperature": 0.0,
  "timeout_seconds": 60.0,
  "max_retries": 2
}

| Parameter | Type | Default | Description |
|---|---|---|---|
| temperature | float | 0.0 | Creativity (0.0 = deterministic, 1.0 = creative) |
| timeout_seconds | float | 60.0 | Request timeout |
| max_retries | int | 2 | Retry attempts on failure |

Security

API Key Protection

  • API keys are stored encrypted in the Platform database
  • Keys are never exposed in API responses
  • Access is controlled by Platform authentication

Data Privacy

Studio conversations may include:

  • Data samples from sources
  • Quality metrics and recommendations
  • Schema information

Ensure your LLM provider's data handling policies align with your organization's requirements. For maximum privacy, use Ollama with local models.

Troubleshooting

"Agent module not available"

Install the required dependencies on the Platform backend:

pip install langchain langgraph langchain-openai langchain-anthropic langchain-ollama

"No LLM configuration found"

  1. Go to Settings > AI Configuration
  2. Create a new configuration
  3. Set it as Active

"Model not found in Ollama"

# Check that the model is installed
ollama list

# Install it if necessary
ollama pull <model-name>

Authentication error (401/403)

  • Verify that your API key is valid
  • Test directly with the provider's API
  • Check that you have available credits

Connection timeout

Check network connectivity from the Platform backend to the LLM provider:

# For Ollama
curl http://localhost:11434/api/tags

# For OpenAI
curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer YOUR_KEY"

Best Practices

  1. Start with Local: Test with Ollama first before using paid APIs
  2. Match the Model: Choose a model suited to your use case
  3. Cost Monitoring: Use gpt-4o-mini for development and testing
  4. Security: Rotate API keys periodically
  5. Multiple Configs: Create configurations for different environments

Next Steps