Configuration Overview¶
MAESTRO configuration guide covering environment variables, Docker settings, and in-app configuration options.
Configuration Hierarchy¶
MAESTRO uses a layered configuration approach:
- Environment Variables (.env file) - System-wide settings
- Docker Compose - Infrastructure and volume configuration
- User Interface - User-specific and runtime settings
- CLI Parameters - Command-specific overrides
Settings Precedence
For overlapping settings, the precedence order is:
- Mission/Task-specific settings (highest priority)
- User UI settings (per-user preferences)
- Environment variables (.env file)
- System defaults (lowest priority)
Quick Setup¶
The easiest way to configure MAESTRO is using the interactive setup script:
This script handles:
- Network configuration (localhost, LAN, or custom domain)
- Security setup (generates secure passwords)
- Admin credentials
- Port configuration
- Timezone settings
For manual setup:
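A minimal manual setup might look like the following sketch (the `.env.example` template filename is an assumption; use whichever template file ships with your checkout):

```shell
# Copy the environment template and edit it by hand (template name assumed)
cp .env.example .env

# Generate a secure JWT secret to paste into .env
openssl rand -hex 32

# After editing .env, start the stack
docker compose up -d
```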
Environment Variable Configuration¶
All environment-based settings are configured in the .env file.
Network Configuration¶
```bash
# Main application port (nginx proxy)
MAESTRO_PORT=80           # Default: 80

# CORS configuration
CORS_ALLOWED_ORIGINS=*    # Production: Set to your domain
ALLOW_CORS_WILDCARD=true  # Set false in production
```
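As an illustration, a hardened production `.env` would pin these to a concrete domain (the domain below is a placeholder):

```shell
# Production CORS settings (replace with your actual domain)
CORS_ALLOWED_ORIGINS=https://maestro.example.com
ALLOW_CORS_WILDCARD=false
```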
Database Configuration¶
```bash
# PostgreSQL settings (CHANGE THESE!)
POSTGRES_DB=maestro_db
POSTGRES_USER=maestro_user
POSTGRES_PASSWORD=CHANGE_THIS_PASSWORD
POSTGRES_HOST=postgres
POSTGRES_PORT=5432

# Admin credentials (CHANGE THESE!)
ADMIN_USERNAME=admin
ADMIN_PASSWORD=CHANGE_THIS_PASSWORD

# JWT authentication
JWT_SECRET_KEY=GENERATE_RANDOM_KEY  # Use: openssl rand -hex 32
```
Hardware Configuration¶
```bash
# GPU device assignment
BACKEND_GPU_DEVICE=0        # GPU for backend service
DOC_PROCESSOR_GPU_DEVICE=0  # GPU for document processing
CLI_GPU_DEVICE=0            # GPU for CLI operations

# Force CPU mode (for systems without NVIDIA GPUs)
FORCE_CPU_MODE=false        # Set true to disable GPU
PREFERRED_DEVICE_TYPE=auto  # Options: auto, cuda, rocm, mps, cpu
```
Performance Configuration¶
```bash
# Concurrency Settings (4 layers of control)

# 1. General background tasks (NOT for LLM calls)
MAX_WORKER_THREADS=20  # Web fetches, document processing
                       # Recommended: 10-50 based on CPU cores

# 2. System-wide LLM API limit
GLOBAL_MAX_CONCURRENT_LLM_REQUESTS=200  # Total across ALL users
                                        # Recommended: 50-500 based on provider limits

# 3. Per-session LLM limit (FALLBACK - users set in UI)
MAX_CONCURRENT_REQUESTS=10  # Default per-session limit
                            # Note: Users typically override this in UI settings

# 4. Web search is hardcoded to 2 concurrent requests
```
Understanding Concurrency
- Worker Threads: Handle general tasks like web scraping and file operations
- Global LLM Limit: Prevents overwhelming your AI provider
- Per-Session Limit: Prevents one user from monopolizing resources
- Web Search: Rate-limited to avoid search provider restrictions
Application Settings¶
```bash
# Timezone
TZ=America/Chicago
VITE_SERVER_TIMEZONE=America/Chicago

# Logging
LOG_LEVEL=ERROR  # Options: DEBUG, INFO, WARNING, ERROR, CRITICAL
```
Docker Configuration¶
Volume Mounts¶
Document storage paths are configured in docker-compose.yml:
```yaml
services:
  backend:
    volumes:
      # Document storage (NOT configurable via env vars)
      - ./maestro_backend/data:/app/data
      # Model caches
      - ./maestro_model_cache:/root/.cache/huggingface
      - ./maestro_datalab_cache:/root/.cache/datalab
      # Reports
      - ./reports:/app/reports
```
Storage Path Configuration
Document storage paths are set in docker-compose.yml:
- Raw files: ./maestro_backend/data/raw_files/
- Markdown: ./maestro_backend/data/markdown_files/
GPU Configuration¶
For multi-GPU systems, assign different GPUs to services:
```yaml
# In docker-compose.yml
services:
  backend:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ['0']  # Use GPU 0
  doc-processor:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ['1']  # Use GPU 1
```
CPU-Only Mode¶
For systems without NVIDIA GPUs:
```bash
# Method 1: Use CPU-only compose file
docker compose -f docker-compose.cpu.yml up -d

# Method 2: Set environment variable in .env
FORCE_CPU_MODE=true
```
In-App Configuration¶
These settings are configured through the web interface after installation.
AI Provider Setup¶
Location: Settings → AI Config
- Choose provider:
  - OpenRouter (100+ models)
  - OpenAI
  - Anthropic
  - Local LLM
  - Custom endpoint
- Configure API credentials
- Select models for different tasks:
  - Fast model (quick operations)
  - Mid model (balanced)
  - Intelligent model (complex tasks)
  - Verifier model (fact-checking)
See AI Providers Guide for detailed setup.
Search Provider Setup¶
Location: Settings → Search
Available providers:
- Tavily - AI-optimized search
- LinkUp - Real-time comprehensive search
- Jina - Advanced content extraction
- SearXNG - Privacy-focused, self-hosted
See Search Providers Guide for comparison.
Web Fetch Configuration¶
Location: Settings → Web Fetch
Configure how MAESTRO fetches full content from web pages:
- Original + Jina Fallback - Best balance (recommended)
- Original - Fast, free, but limited
- Jina Reader - Advanced JavaScript rendering
See Web Fetch Guide for detailed setup.
Research Configuration¶
Location: Settings → Research
Configurable parameters:
- Performance:
  - Concurrent Requests (overrides MAX_CONCURRENT_REQUESTS)
  - Search depth
  - Result counts
- Quality:
  - Research iterations
  - Question generation depth
  - Writing passes
  - Verification settings
- Presets:
  - Quick (fast, surface-level)
  - Standard (balanced)
  - Deep (thorough, slower)
  - Custom
Per-Mission Settings
Research settings can be overridden for individual missions/tasks, allowing fine-tuned control for specific research needs.
User Management¶
Location: Settings → Users (Admin only)
- Create/modify user accounts
- Set permissions
- Configure quotas
- View usage statistics
Security Best Practices¶
- Immediate Actions:
  - Change default passwords
  - Generate secure JWT secret
  - Disable CORS wildcard in production
- Production Deployment:
- Secure Password Generation:
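Secure values can be generated with openssl, the same tool referenced for the JWT secret above; a minimal sketch:

```shell
# Generate a 64-character hex JWT secret and two random passwords
JWT_SECRET_KEY=$(openssl rand -hex 32)
POSTGRES_PASSWORD=$(openssl rand -base64 24)
ADMIN_PASSWORD=$(openssl rand -base64 24)

# Sanity check: the JWT secret is 64 hex characters long
echo "${#JWT_SECRET_KEY}"
```

Paste the generated values into the corresponding .env entries.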
Troubleshooting¶
Configuration Not Applied¶
After changing .env:
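Environment variables are only read at container start, so the containers must be recreated; a typical sequence, assuming standard Docker Compose usage:

```shell
# Recreate containers so the new .env values are picked up
docker compose down
docker compose up -d
```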
Verify Settings¶
Check loaded configuration:
```bash
# View environment variables
docker compose config

# Check specific service
docker exec maestro-backend env | grep MAX_WORKER
```
Common Issues¶
- Permission Denied:
- Port Conflicts:
- GPU Not Detected:
  - Check NVIDIA drivers (need 575+ for CUDA 12.9)
  - Verify Docker GPU support
  - Set FORCE_CPU_MODE=true as fallback
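Docker GPU support can be verified by running nvidia-smi inside a CUDA base container (the image tag below is illustrative; pick one matching your installed driver):

```shell
# Should print the same GPU table as running nvidia-smi on the host
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```

If this fails, the NVIDIA Container Toolkit is likely missing or misconfigured.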
Configuration Files Reference¶
| File | Purpose | Location |
|---|---|---|
| .env | Environment variables | Project root |
| docker-compose.yml | Service definitions | Project root |
| docker-compose.cpu.yml | CPU-only configuration | Project root |
| nginx/nginx.conf | Reverse proxy settings | ./nginx/ |
| User settings | UI configurations | PostgreSQL database |
Next Steps¶
- Environment Variables Reference - Complete list
- AI Providers - LLM configuration
- Search Providers - Web search setup
- Web Fetch Configuration - Content fetching setup
- First Login - Initial setup guide