Automatically route each request to the right model based on task complexity. The router is trained on millions of vibecoding prompts to learn when a cheap model is sufficient and when a powerful one is needed, so you cut API costs and improve conversion rates without hand-picking a model per task.
Quick Start

import { MorphClient } from '@morphllm/morphsdk';
import Anthropic from '@anthropic-ai/sdk';

const morph = new MorphClient({ apiKey: process.env.MORPH_API_KEY });
const anthropic = new Anthropic();

// Router picks the right model
const { model } = await morph.routers.anthropic.selectModel({
  input: 'Add error handling to this function'
});

// Use it
const response = await anthropic.messages.create({
  model, // claude-haiku-4-5-20251001 (cheap) for simple tasks
  messages: [{ role: 'user', content: '...' }]
});
Latency: ~430ms on average; routing can run in parallel with your request preparation.

Model Selection

The router returns just the model name. Use it directly with your provider’s SDK:
const { model } = await morph.routers.anthropic.selectModel({
  input: userQuery
});
// Returns: { model: "claude-sonnet-4-5-20250929" }

Available Models

| Provider  | Fast/Cheap                  | Powerful                             |
|-----------|-----------------------------|--------------------------------------|
| Anthropic | claude-haiku-4-5-20251001   | claude-sonnet-4-5-20250929           |
| OpenAI    | gpt-5-mini                  | gpt-5-low, gpt-5-medium, gpt-5-high  |
| Gemini    | gemini-2.5-flash            | gemini-2.5-pro                       |
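The fast/cheap column also makes a sensible local fallback when the router is unavailable. A minimal sketch of such a lookup, mirroring the table above (`FALLBACKS` and `fallbackModel` are illustrative helpers, not part of the SDK):

```typescript
// Local fallback map mirroring the models table above.
const FALLBACKS = {
  anthropic: { cheap: 'claude-haiku-4-5-20251001', powerful: 'claude-sonnet-4-5-20250929' },
  openai:    { cheap: 'gpt-5-mini',                powerful: 'gpt-5-high' },
  gemini:    { cheap: 'gemini-2.5-flash',          powerful: 'gemini-2.5-pro' },
} as const;

type Provider = keyof typeof FALLBACKS;

// Pick a fallback model for a provider; defaults to the cheap tier.
function fallbackModel(provider: Provider, tier: 'cheap' | 'powerful' = 'cheap'): string {
  return FALLBACKS[provider][tier];
}
```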

Modes

  • balanced (default) - Balances cost and quality
  • aggressive - Aggressively optimizes for cost (cheaper models)
// Most use cases
await morph.routers.openai.selectModel({
  input: userQuery,
  mode: 'balanced' 
});

// When cost is critical
await morph.routers.openai.selectModel({
  input: userQuery,
  mode: 'aggressive' // Uses cheaper models
});

Real-World Example

Route dynamically in production to cut costs while maintaining quality:
import { MorphClient } from '@morphllm/morphsdk';
import OpenAI from 'openai';

const morph = new MorphClient({ apiKey: process.env.MORPH_API_KEY });
const openai = new OpenAI();

async function handleUserRequest(userInput: string) {
  // Router analyzes complexity (~430ms)
  const { model } = await morph.routers.openai.selectModel({
    input: userInput
  });

  // Use the selected model
  return await openai.chat.completions.create({
    model,
    messages: [{ role: 'user', content: userInput }]
  });
}

// Simple: "Add a TODO comment" → gpt-5-mini
// Complex: "Design event sourcing system" → gpt-5-high

When to Use

Use router when:
  • Processing varied user requests (simple to complex)
  • You want to minimize API costs automatically
  • Building cost-conscious AI products
Skip router when:
  • All tasks need the same model tier
  • The ~430ms routing latency matters more than cost savings
  • You need maximum predictability
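If you serve both kinds of workloads, the decision above can be encoded in a small wrapper: route only when tasks vary, and bypass routing when a fixed tier is required. This sketch injects the routing call as a function so it stands alone; `Router`, `chooseModel`, and `fixedModel` are illustrative names, not part of the SDK:

```typescript
// Any routing call that resolves to a model name, e.g. a bound
// morph.routers.openai.selectModel.
type Router = (input: string) => Promise<{ model: string }>;

async function chooseModel(
  input: string,
  route: Router,
  opts: { fixedModel?: string } = {}
): Promise<string> {
  // Fixed tier requested: skip the ~430ms routing call entirely.
  if (opts.fixedModel) return opts.fixedModel;
  // Varied workload: let the router pick.
  const { model } = await route(input);
  return model;
}
```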

API Reference

const { model } = await morph.routers.{provider}.selectModel({
  input: string,     // Your task description
  mode?: 'balanced' | 'aggressive'  // Default: balanced
});

// Returns: { model: string }
Providers: openai | anthropic | gemini
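Written out as TypeScript types, the call shape above looks roughly like this. These interfaces are inferred from the examples on this page, not taken from the SDK's published typings:

```typescript
// Assumed request shape for selectModel.
interface SelectModelParams {
  input: string;                      // the task description to classify
  mode?: 'balanced' | 'aggressive';   // optional; defaults to 'balanced'
}

// Assumed response shape.
interface SelectModelResult {
  model: string;                      // e.g. "claude-sonnet-4-5-20250929"
}

// Small runtime guard for the mode union, handy when mode comes from config.
function isMode(value: string): value is NonNullable<SelectModelParams['mode']> {
  return value === 'balanced' || value === 'aggressive';
}
```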

Error Handling

Always provide a fallback model:
let model = 'claude-haiku-4-5-20251001'; // Fallback

try {
  const result = await morph.routers.anthropic.selectModel({
    input: userInput
  });
  model = result.model;
} catch (error) {
  console.error('Router failed, using fallback:', error);
}

// Use model (either selected or fallback)
await anthropic.messages.create({ model, ... });
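Beyond catching errors, you may also want to bound routing latency. A sketch of a timeout guard using Promise.race; `selectWithTimeout` is a local helper written for illustration, not an SDK feature:

```typescript
// Resolve to the router's choice, or to the fallback model if routing
// is slow or errors, so the user request is never blocked on routing.
async function selectWithTimeout(
  routing: Promise<{ model: string }>,
  fallback: string,
  ms = 1000
): Promise<string> {
  const timeout = new Promise<{ model: string }>((resolve) =>
    setTimeout(() => resolve({ model: fallback }), ms)
  );
  try {
    const { model } = await Promise.race([routing, timeout]);
    return model;
  } catch {
    return fallback; // router rejected before the timeout fired
  }
}
```

Called as, for example, `selectWithTimeout(morph.routers.anthropic.selectModel({ input }), 'claude-haiku-4-5-20251001', 500)`.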

Performance

  • Latency: ~430ms average
  • Parallel: Run routing while preparing your request
  • HTTP/2: Connection reuse for subsequent calls
// Run in parallel to save time
const [routerResult, userData] = await Promise.all([
  morph.routers.openai.selectModel({ input: userQuery }),
  fetchUserData(userId)
]);

await openai.chat.completions.create({
  model: routerResult.model,
  messages: [{ role: 'user', content: userData }]
});