Quick Start

This guide helps you get started with QALITA Studio in minutes.

Prerequisites

  • QALITA Platform deployed and accessible
  • Administrator access to configure LLM settings
  • Ollama installed (optional, for local models)

Step 1: Access QALITA Platform

QALITA Studio is integrated into QALITA Platform. Access your Platform instance:

  1. Open your browser and navigate to your Platform URL
  2. Log in with your credentials
  3. In the sidebar, click on Studio

Step 2: Configure LLM Provider

Before using Studio, an administrator must configure an LLM provider.

  1. Go to Settings in the Platform
  2. Select AI Configuration
  3. Click Add Configuration

Configure a Provider

Option A: Ollama (Local)

First, install Ollama on your Worker server or another reachable host:

# macOS / Linux
curl -fsSL https://ollama.ai/install.sh | sh

# Start Ollama
ollama serve

# Download a model
ollama pull llama3.2
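
By default, Ollama listens only on 127.0.0.1. If the Worker runs on a different host, bind Ollama to a reachable interface via the OLLAMA_HOST environment variable (a minimal sketch; adjust the address to your network):

# Bind Ollama to all interfaces so remote Workers can reach it
OLLAMA_HOST=0.0.0.0:11434 ollama serve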

Then configure in Platform:

  • Provider: Ollama
  • Endpoint URL: http://localhost:11434 (or your Ollama host)
  • Model: llama3.2

Option B: OpenAI

  • Provider: OpenAI
  • API Key: Your OpenAI API key (sk-...)
  • Model: gpt-4o, gpt-4o-mini, or gpt-3.5-turbo
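
Before saving, you can sanity-check the key from any shell; this calls OpenAI's standard model-listing endpoint and should return a JSON list of models if the key is valid:

# Replace sk-... with your actual API key
curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer sk-..."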

Option C: Azure OpenAI

  • Provider: Azure OpenAI
  • Endpoint URL: Your Azure endpoint
  • API Key: Your Azure API key
  • Model: Your deployed model name

Option D: Anthropic (Claude)

  • Provider: Anthropic
  • API Key: From console.anthropic.com
  • Model: claude-3-5-sonnet-20241022 or claude-3-opus-20240229

Option E: Mistral AI

  • Provider: Mistral
  • API Key: From console.mistral.ai
  • Model: mistral-large-latest or mistral-small-latest

Activate Configuration

  1. Click Test Connection to verify
  2. If successful, click Save
  3. Enable the configuration as Active

Step 3: Ensure Worker Connection

Studio uses Workers to access data sources. Verify a Worker is connected:

  1. Go to Workers in Platform
  2. Check that at least one Worker shows as Connected
  3. If no Worker is connected, start one with the CLI:
    qalita worker start
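
To keep the Worker running after the terminal closes, one minimal approach is to background it and capture its logs (a sketch; production setups typically run the Worker under a service manager such as systemd):

# Run the Worker in the background and log its output
nohup qalita worker start > worker.log 2>&1 &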

Step 4: First Conversation

  1. Navigate to Studio in the sidebar
  2. Select a Source from the context panel (optional but recommended)
  3. Type your first message, for example:
    Hello! Can you describe this data source?
  4. Press Enter or click Send
  5. Watch the response stream in real time

Example Prompts

General questions:

  • "Explain what data quality means"
  • "What are the main types of data anomalies?"
  • "Generate a SQL script to detect duplicates"

With source context:

  • "Describe the schema of this source"
  • "Show me a sample of 10 records"
  • "Which columns have null values?"

With issue context:

  • "Analyze this quality issue and suggest solutions"
  • "What patterns do you see in the affected data?"
  • "Propose a fix for this anomaly"

Step 5: Using Data Tools

Studio can execute operations on your data sources through the Worker:

Describe a Source

Describe the schema and structure of this source

The agent will call the describe_source tool and return column names, types, and row counts.

Query Data

Execute: SELECT * FROM customers WHERE email IS NULL LIMIT 10

The agent can run SQL queries on database sources.

Sample Data

Show me a random sample of 20 records

The agent will fetch a random sample from the source.

Verification

Check Agent Capabilities

Access the API endpoint to verify Studio is properly configured:

curl -H "Authorization: Bearer YOUR_TOKEN" \
https://your-platform/api/v1/studio/agent/capabilities

Should return:

{
  "agent_available": true,
  "llm_configured": true,
  "active_config": {
    "id": 1,
    "name": "Production LLM",
    "provider": "openai",
    "model_name": "gpt-4o-mini"
  }
}
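
For a scripted check, you can assert both flags in one line (assumes jq is installed; the command exits non-zero if either flag is false):

curl -s -H "Authorization: Bearer YOUR_TOKEN" \
  https://your-platform/api/v1/studio/agent/capabilities \
  | jq -e '.agent_available and .llm_configured'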

Check Worker Status

curl -H "Authorization: Bearer YOUR_TOKEN" \
https://your-platform/api/v1/studio/status

Should return connected workers:

{
  "connected_workers": [1],
  "worker_count": 1
}

Troubleshooting

"Agent module not available"

The LangChain/LangGraph dependencies are not installed on the backend. Contact your administrator to install:

pip install langchain langgraph langchain-openai langchain-anthropic langchain-ollama

"No LLM configuration found"

No active LLM configuration exists for your organization. An administrator needs to configure one in Settings > AI Configuration.

"No worker available"

No Worker is connected. Start a Worker with:

qalita worker start

Ollama connection issues

# Verify Ollama is running
curl http://localhost:11434/api/tags

# Check if model is installed
ollama list

# Pull missing model
ollama pull llama3.2

Streaming not working

If responses appear all at once instead of streaming, check:

  1. The LLM provider supports streaming
  2. No proxy is buffering responses (see the sketch below)
  3. Try the non-streaming endpoint as a fallback
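
If the Platform sits behind a reverse proxy such as nginx, disabling response buffering for the Studio API usually restores streaming. A minimal sketch; the location path and upstream name are assumptions to adapt to your deployment:

# nginx: pass streamed chunks through without buffering
location /api/v1/studio/ {              # hypothetical path, match your deployment
    proxy_pass http://platform-backend; # hypothetical upstream name
    proxy_buffering off;                # do not buffer the streamed response
    proxy_cache off;
    proxy_read_timeout 300s;            # allow long-running generations
}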

Support

Need help? Check our complete documentation or contact support.