# Models
OpenSploit supports 75+ LLM providers through the AI SDK. Configure providers and select models to power your security assessments.
## Provider Setup
Before using a model, configure the provider with the `/connect` command:

```
/connect
```

Select your provider and enter your API key.
## Model Selection
Select a model with the `/models` command:

```
/models
```

Or use the keybind `ctrl+x m` to open the model selector.
## Recommended Models
For security testing, we recommend models with strong reasoning and tool-calling capabilities:
| Model | Provider | Best For |
|-------|----------|----------|
| Claude Opus 4.5 | Anthropic | Complex reasoning, exploitation |
| Claude Sonnet 4 | Anthropic | Balanced performance |
| GPT-4o | OpenAI | General purpose |
| Gemini Pro | Google | Fast enumeration |
Only models with strong tool-calling abilities work well with OpenSploit's security tools; smaller or older models often struggle with complex multi-step attacks.
## Default Model
Set a default model in `opensploit.json`:

```json
{
  "model": "anthropic/claude-sonnet-4-20250514"
}
```
The format is `provider_id/model_id`.
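Because some model ids may themselves contain slashes, the `provider_id/model_id` string should be split only on the first `/`. A minimal sketch of that split — the function name is illustrative, not part of OpenSploit:

```python
def parse_model_id(model: str) -> tuple[str, str]:
    """Split a provider_id/model_id string into its two parts.

    Split only on the first slash, so model ids that contain
    slashes survive intact. (Hypothetical helper for illustration.)
    """
    provider, _, model_id = model.partition("/")
    if not provider or not model_id:
        raise ValueError(f"expected provider_id/model_id, got {model!r}")
    return provider, model_id


print(parse_model_id("anthropic/claude-sonnet-4-20250514"))
# → ('anthropic', 'claude-sonnet-4-20250514')
```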
## Model Configuration
Configure model-specific settings:
```json
{
  "model": "anthropic/claude-sonnet-4-20250514",
  "models": {
    "anthropic/claude-sonnet-4-20250514": {
      "temperature": 0.7,
      "maxTokens": 8192
    }
  }
}
```
## Variants
Variants are alternative configurations of the same model, letting you switch settings per task:
```json
{
  "models": {
    "anthropic/claude-sonnet-4-20250514": {
      "variants": {
        "default": {
          "temperature": 0.7
        },
        "precise": {
          "temperature": 0.2
        },
        "creative": {
          "temperature": 0.9
        }
      }
    }
  }
}
```
Cycle between variants with `ctrl+t`.
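Conceptually, a variant's settings overlay the model's base configuration. A hypothetical sketch of that merge, not OpenSploit's actual implementation:

```python
def resolve_variant(model_config: dict, variant: str) -> dict:
    """Overlay a named variant's settings onto the model's base settings.

    Keys set in the variant win; everything else falls through
    from the base config. (Illustrative only.)
    """
    base = {k: v for k, v in model_config.items() if k != "variants"}
    overrides = model_config.get("variants", {}).get(variant, {})
    return {**base, **overrides}


config = {
    "maxTokens": 8192,
    "variants": {
        "default": {"temperature": 0.7},
        "precise": {"temperature": 0.2},
    },
}
print(resolve_variant(config, "precise"))
# → {'maxTokens': 8192, 'temperature': 0.2}
```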
## Local Models
Run models locally with Ollama:
1. Install Ollama:

   ```shell
   curl -fsSL https://ollama.ai/install.sh | sh
   ```

2. Pull a model:

   ```shell
   ollama pull llama3.2
   ollama pull codellama
   ```

3. Connect in OpenSploit:

   ```
   /connect
   ```

   Select "Ollama" and configure.

4. Set as default:

   ```json
   {
     "model": "ollama/llama3.2"
   }
   ```
## Model Priority
At startup, OpenSploit selects models in this order:
1. Command-line flag (`--model` or `-m`)
2. Config file specification
3. Last used model
4. First available model
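The fallback order above amounts to a first-match selection. A hedged sketch of that logic (argument names are illustrative; OpenSploit's internals are not shown here):

```python
def select_model(cli_flag, config_model, last_used, available):
    """Return the first model set, in priority order:
    CLI flag, config file, last-used, then the first
    available model (or None if nothing is available)."""
    for candidate in (cli_flag, config_model, last_used):
        if candidate:
            return candidate
    return available[0] if available else None


print(select_model(None, "anthropic/claude-sonnet-4-20250514", None,
                   ["ollama/llama3.2"]))
# → anthropic/claude-sonnet-4-20250514
```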
## Per-Agent Models
Assign different models to different agents:
```json
{
  "agents": {
    "recon": {
      "model": "anthropic/claude-haiku-3"
    },
    "exploit": {
      "model": "anthropic/claude-opus-4"
    }
  }
}
```
Use faster models for simple tasks and more powerful models for complex reasoning.
## Token Limits
Be aware of context limits when working with large scan outputs:
| Model | Context Window |
|-------|----------------|
| Claude Opus 4.5 | 200K tokens |
| Claude Sonnet 4 | 200K tokens |
| GPT-4o | 128K tokens |
| Llama 3.2 | 128K tokens |
Use `/compact` to summarize long sessions and free up context.
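Before pasting a large scan output, it can help to sanity-check it against the context window. A rough heuristic for English text is about 4 characters per token; the sketch below uses that approximation and a hypothetical `reserve` for the model's response (both numbers are assumptions, not OpenSploit behavior):

```python
def approx_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token
    for English text. An approximation, not a real tokenizer."""
    return len(text) // 4


def fits_context(text: str, context_window: int, reserve: int = 8192) -> bool:
    """Check whether text plausibly fits in the context window,
    leaving `reserve` tokens of headroom for the model's response."""
    return approx_tokens(text) <= context_window - reserve


scan_output = "PORT STATE SERVICE\n22/tcp open ssh\n" * 5000
print(fits_context(scan_output, 128_000))
# → True
```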