LLM Providers

Quell's rule engine handles structured specs (docstrings, Pydantic models) deterministically; no LLM is required for those. An LLM is called only for:

  • quell reproduce — converting natural language bug descriptions into tests (see the sketch below)
  • Complex or ambiguous specs that the rule engine can't parse

If you only use quell check and quell score, you don't need to configure an LLM at all.
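
For example, quell reproduce aims to turn a report like "slugify drops accented characters instead of transliterating them" into a regression test along these lines (the module, function, and expected value here are hypothetical, purely for illustration):

# Sketch of a generated reproduction test (all names are hypothetical)
from myapp.text import slugify  # the function named in the bug report

def test_slugify_transliterates_accented_characters():
    # FAILS on the buggy code, PASSES once the bug is fixed
    assert slugify("Crème Brûlée") == "creme-brulee"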

Anthropic (Claude)

Best for: High-quality test generation, complex specs.

export ANTHROPIC_API_KEY=sk-ant-...

# pyproject.toml
[tool.quell]
llm_provider = "anthropic"
llm_model = "claude-sonnet-4-5"   # or claude-opus-4-7

Supported models: any claude-* model. claude-sonnet-4-5 is recommended for a good balance of quality and cost.

OpenAI (GPT-4o)

export OPENAI_API_KEY=sk-...

# pyproject.toml
[tool.quell]
llm_provider = "openai"
llm_model = "gpt-4o"

Ollama (local, offline)

Run LLM inference locally — no API key, no data leaves your machine.

# Install Ollama: https://ollama.ai
ollama pull codellama

# pyproject.toml
[tool.quell]
llm_provider = "ollama"
llm_model = "codellama"
ollama_base_url = "http://localhost:11434"

For Python code, code-focused models such as codellama or deepseek-coder tend to produce better tests than general-purpose models.
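
Before pointing Quell at a local server, you can confirm it is reachable at the configured ollama_base_url and that the model has been pulled. This quick check uses Ollama's standard /api/tags endpoint (the URL matches the default config above):

import json
import urllib.request

# List the models the local Ollama server has pulled
with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
    models = [m["name"] for m in json.load(resp)["models"]]

print(models)  # e.g. ['codellama:latest'] after `ollama pull codellama`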

What the LLM receives

When Quell calls the LLM, it sends only:

  1. The requirement description and constraint kind
  2. The target function's source code
  3. The spec text (docstring or bug description)
  4. Clear instructions: write a pytest test that PASSES on correct code and FAILS on code that violates the spec
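
Put together, the request looks roughly like the sketch below. The field names and layout are illustrative assumptions, not Quell's actual prompt format:

import inspect

def withdraw(balance: float, amount: float) -> float:
    """Raise ValueError if amount is negative."""
    if amount < 0:
        raise ValueError("amount must be non-negative")
    return balance - amount

# Illustrative payload only; field names are assumptions, not Quell's wire format
payload = {
    "requirement": "raises ValueError on negative amounts",  # 1. description + constraint kind
    "constraint_kind": "exception",
    "source": inspect.getsource(withdraw),                   # 2. target function's source
    "spec": withdraw.__doc__,                                # 3. docstring or bug description
    "instructions": "Write a pytest test that PASSES on "    # 4. the instruction
                    "correct code and FAILS on code that violates the spec.",
}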

What is NOT sent:

  • Other files in your project
  • Environment variables or secrets
  • Database schemas or configuration

Cost estimates

The LLM is called at most once per uncovered requirement, and only when the rule engine can't handle that requirement, so costs stay small: a run where 50 requirements fall through to claude-sonnet-4-5 costs roughly 50 × $0.001 = $0.05.

Provider     Model               Cost per generated test
Anthropic    claude-sonnet-4-5   ~$0.001
OpenAI       gpt-4o              ~$0.002
Ollama       local               $0