
Prerequisites

Before deploying Scout on-premise, you need:
  • A Kubernetes cluster with kubectl access
  • An OpenAI-compatible API (e.g. LiteLLM) serving an Anthropic Claude model
  • SYNQ OAuth credentials from the on-premise integration setup:
    • SYNQ_CLIENT_ID
    • SYNQ_CLIENT_SECRET
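
Before deploying, it can help to confirm the credentials are actually exported in the environment you deploy from. A minimal pre-flight sketch (the check itself is illustrative and not part of the official repository; only the variable names come from the setup above):

```shell
# Pre-flight sketch: verify the SYNQ OAuth credentials are set before
# deploying. The variable names come from the integration setup; this
# check is illustrative, not part of the getsynq/synq-scout-k8s repo.
check_credentials() {
  ok=0
  for var in SYNQ_CLIENT_ID SYNQ_CLIENT_SECRET; do
    eval "val=\${$var}"
    if [ -z "$val" ]; then
      echo "missing: $var"
      ok=1
    fi
  done
  [ "$ok" -eq 0 ] && echo "credentials present"
}

# Example usage with placeholder values:
SYNQ_CLIENT_ID=example SYNQ_CLIENT_SECRET=example check_credentials
```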

Deployment

All deployment configurations, detailed instructions, and examples are available in our official repository: getsynq/synq-scout-k8s. The repository provides complete deployment guidance, including:
  • Kubernetes configurations
  • Environment setup instructions
  • Sample deployment files
  • Troubleshooting guides

Configuration

During your on-premise integration setup in SYNQ, you’ll receive OAuth credentials that are required for deployment:
  • SYNQ_CLIENT_ID
  • SYNQ_CLIENT_SECRET
These credentials authenticate Scout with SYNQ’s services and must be configured in your deployment.
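
In a Kubernetes deployment, credentials like these are typically supplied via a Secret. A minimal sketch (the secret name and structure below are assumptions for illustration; match them to whatever the manifests in getsynq/synq-scout-k8s expect):

```yaml
# Illustrative Secret holding the SYNQ OAuth credentials.
# The secret name is an assumption -- align it with the manifests
# in the getsynq/synq-scout-k8s repository.
apiVersion: v1
kind: Secret
metadata:
  name: synq-scout-credentials
type: Opaque
stringData:
  SYNQ_CLIENT_ID: "<your-client-id>"
  SYNQ_CLIENT_SECRET: "<your-client-secret>"
```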

LLM Model Configuration

Scout uses two model roles — a thinking model (for reasoning and triage) and a summary model (for generating summaries). Both default to Claude Sonnet when not explicitly configured. You can override the models using environment variables:
  Variable               Description
  SYNQ_THINKING_MODEL    Model used for reasoning and triage
  SYNQ_SUMMARY_MODEL     Model used for generating summaries
  OPENAI_MODEL           Fallback model for both roles if the above are not set
The model names should match what your OpenAI-compatible API expects (e.g. the model name configured in LiteLLM). To see the list of currently supported models and their stable/latest versions, go to Settings → Scout AI → Models in the SYNQ platform.
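
The fallback order above can be sketched as follows (the resolution logic is illustrative, not Scout's actual implementation; the default model name shown is a placeholder standing in for Claude Sonnet):

```python
import os

# Illustrative sketch of the documented fallback order:
# role-specific variable -> OPENAI_MODEL -> built-in default.
# DEFAULT_MODEL is a placeholder name; Scout defaults to Claude Sonnet,
# under whatever name your OpenAI-compatible API (e.g. LiteLLM) serves it.
DEFAULT_MODEL = "claude-sonnet"

def resolve_model(role_var: str) -> str:
    """Return the model for a role ("SYNQ_THINKING_MODEL" or
    "SYNQ_SUMMARY_MODEL"), falling back to OPENAI_MODEL, then the default."""
    return (
        os.environ.get(role_var)
        or os.environ.get("OPENAI_MODEL")
        or DEFAULT_MODEL
    )

thinking_model = resolve_model("SYNQ_THINKING_MODEL")
summary_model = resolve_model("SYNQ_SUMMARY_MODEL")
```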

Support

For deployment assistance, refer to the troubleshooting guides in the getsynq/synq-scout-k8s repository. On-premise deployment gives you complete control over your data while maintaining Scout's full functionality and meeting enterprise security requirements.