
Multi-LLM Orchestration

Learn how to use multiple LLMs for optimal performance.

Lattice's Multi-LLM Orchestration lets you route different tasks to different language models, optimizing each request for cost, speed, or capability.

Why Multi-LLM?

  • Cost Optimization - Use cheaper models for simple tasks
  • Speed - Use faster models for time-sensitive operations
  • Capability - Use specialized models for specific domains
  • Reliability - Fallback to alternative models on failures
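The reliability point above can be sketched as a simple fallback loop. This is a hypothetical illustration, not the Lattice API: `completeWithFallback` and the `model.complete` interface are assumed names for the pattern of trying models in preference order.

```javascript
// Hypothetical sketch, not the Lattice API: try models in preference
// order and fall back to the next one when a call fails.
async function completeWithFallback(models, prompt) {
  let lastError;
  for (const model of models) {
    try {
      return await model.complete(prompt); // assumed client interface
    } catch (err) {
      lastError = err; // remember the failure and try the next model
    }
  }
  throw lastError; // every model in the chain failed
}
```

In practice you would order the chain by preference, e.g. a capable model first and a cheaper or more available one as the fallback.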

Configuration

```javascript
const agent = new Agent({
  name: 'SmartRouter',
  models: {
    // Named model slots; 'default' is used when no routing rule matches
    default: 'gpt-4-turbo',
    fast: 'gpt-3.5-turbo',
    code: 'claude-3-opus',
    vision: 'gpt-4-vision'
  },
  routing: {
    // Map task types to the model slots defined above
    codeGeneration: 'code',
    quickResponses: 'fast',
    imageAnalysis: 'vision'
  }
});
```

Lattice automatically handles model switching and maintains context across different LLMs.
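Conceptually, routing with shared context works like the sketch below. This is a hypothetical illustration of the pattern, not Lattice internals: the `Router` class, `send` method, and `model.complete` interface are assumed names. The key idea is that one conversation history is passed to whichever model a task resolves to, so context survives model switches.

```javascript
// Hypothetical sketch, not Lattice internals: route each request to a
// model by task type while sharing one history across model switches.
class Router {
  constructor(models, routing, fallbackKey = 'default') {
    this.models = models;       // slot name -> model client
    this.routing = routing;     // task type -> slot name
    this.fallbackKey = fallbackKey;
    this.history = [];          // shared context across all models
  }

  async send(task, content) {
    const key = this.routing[task] ?? this.fallbackKey;
    const model = this.models[key];
    this.history.push({ role: 'user', content });
    const reply = await model.complete(this.history); // assumed interface
    this.history.push({ role: 'assistant', content: reply });
    return reply;
  }
}
```

Because every model receives the full history, a code-generation turn handled by one model can refer back to an earlier turn answered by another.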