Morph + Vercel AI SDK: Streaming Code Edits

The Vercel AI SDK provides powerful primitives for building AI-powered applications. When combined with Morph’s blazing-fast apply model through the Vercel AI Gateway, you can create real-time code editing experiences that feel instant and work with any model.

Why AI SDK + AI Gateway + Morph?

- Universal Model Access: Access ~100 AI models including Morph through a single gateway without managing API keys
- Streaming Performance: Stream code edits at 4500+ tokens/second directly to your UI
- React Integration: Built-in hooks for managing streaming state and UI updates
- Type Safety: Full TypeScript support with proper typing for streaming responses
- Real-time UX: Show edits being applied character-by-character for immediate feedback
- Easy Model Switching: Switch between any model (including Morph) without code changes

Quick Setup

Install the AI SDK 5 beta and configure it to work with the Vercel AI Gateway:
npm install ai@beta
The AI Gateway handles authentication and model routing automatically, so you don't have to manage per-provider API keys.
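When running outside a Vercel deployment, the gateway typically authenticates via an AI_GATEWAY_API_KEY environment variable (verify this against your gateway setup). Once configured, a plain 'provider/model' string is all streamText or generateText needs; a minimal sketch:

import { generateText } from 'ai';

// Minimal sketch: in AI SDK 5 a plain 'provider/model' string is resolved
// through the Vercel AI Gateway, so no provider SDKs are required.
// Outside Vercel, set AI_GATEWAY_API_KEY in your environment first.
async function smokeTest() {
  const { text } = await generateText({
    model: 'morph/morph-v3-large',
    messages: [
      {
        role: 'user',
        content: '<code>const x = 1</code>\n<update>rename x to count</update>',
      },
    ],
  });
  console.log(text);
}

smokeTest().catch(console.error);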

Basic Streaming Implementation with AI Gateway

app/api/morph/route.ts
import { streamText } from 'ai';

export const maxDuration = 30;

export async function POST(req: Request) {
  const { originalCode, updateSnippet } = await req.json();

  const result = streamText({
    // Route to Morph through the AI Gateway via a plain model ID string
    model: 'morph/morph-v3-large',
    messages: [
      {
        role: 'user',
        content: `<code>${originalCode}</code>\n<update>${updateSnippet}</update>`
      }
    ],
  });

  // AI SDK 5: streamText returns synchronously, and the response is built with
  // toUIMessageStreamResponse (the older toAIStreamResponse helper was removed)
  return result.toUIMessageStreamResponse();
}

Dynamic Model Switching with AI Gateway

One of the biggest advantages of the AI Gateway is the ability to switch between models dynamically without changing your code:
'use client';

// In AI SDK 5, the React hooks live in @ai-sdk/react (the ai/react entry point was removed)
import { useCompletion } from '@ai-sdk/react';
import { useState, useCallback } from 'react';

interface MorphStreamOptions {
  onComplete?: (result: string) => void;
  onError?: (error: Error) => void;
  defaultModel?: string;
}

export function useMorphStream(options: MorphStreamOptions = {}) {
  const [originalCode, setOriginalCode] = useState('');
  const [appliedCode, setAppliedCode] = useState<string | null>(null);
  const [selectedModel, setSelectedModel] = useState(options.defaultModel || 'morph/morph-v3-large');
  
  const {
    completion,
    isLoading,
    error,
    complete,
    stop,
  } = useCompletion({
    api: '/api/morph',
    onFinish: (prompt, completion) => {
      setAppliedCode(completion);
      options.onComplete?.(completion);
    },
    onError: options.onError,
  });

  const applyEdit = useCallback(async (code: string, edit: string, model?: string) => {
    setOriginalCode(code);
    setAppliedCode(null);
    
    await complete('', {
      body: {
        originalCode: code,
        updateSnippet: edit,
        model: model || selectedModel, // Allow model override
      },
    });
  }, [complete, selectedModel]);

  const acceptChanges = useCallback(() => {
    if (appliedCode) {
      setOriginalCode(appliedCode);
      setAppliedCode(null);
    }
  }, [appliedCode]);

  const rejectChanges = useCallback(() => {
    setAppliedCode(null);
  }, []);

  return {
    originalCode,
    streamingCode: completion,
    appliedCode,
    selectedModel,
    setSelectedModel,
    isLoading,
    error,
    applyEdit,
    acceptChanges,
    rejectChanges,
    stop,
  };
}
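
A minimal usage sketch of the hook (the component structure and import path are illustrative):

'use client';

import { useMorphStream } from './useMorphStream'; // adjust to wherever the hook lives

export function EditDemo() {
  const {
    originalCode,
    streamingCode,
    appliedCode,
    isLoading,
    applyEdit,
    acceptChanges,
    rejectChanges,
  } = useMorphStream({ defaultModel: 'morph/morph-v3-large' });

  return (
    <div>
      {/* Show the live stream while loading, then the pending result, then the accepted code */}
      <pre>{isLoading ? streamingCode : appliedCode ?? originalCode}</pre>
      <button onClick={() => applyEdit('const x = 1', 'rename x to count')} disabled={isLoading}>
        Apply edit
      </button>
      {appliedCode && (
        <>
          <button onClick={acceptChanges}>Accept</button>
          <button onClick={rejectChanges}>Reject</button>
        </>
      )}
    </div>
  );
}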

Updated Route Handler for Model Selection

Update your route handler to accept model selection:
app/api/morph/route.ts
import { streamText } from 'ai';

export const maxDuration = 30;

export async function POST(req: Request) {
  // Fall back to Morph when the client doesn't specify a model
  const { originalCode, updateSnippet, model = 'morph/morph-v3-large' } = await req.json();

  const result = streamText({
    // Use the selected model through the AI Gateway
    model: model,
    messages: [
      {
        role: 'user',
        content: `<code>${originalCode}</code>\n<update>${updateSnippet}</update>`
      }
    ],
  });

  return result.toUIMessageStreamResponse();
}

Batch Processing with Multiple Models

Compare results across different models easily:
app/api/morph/compare/route.ts
import { streamText } from 'ai';

export const maxDuration = 60;

export async function POST(req: Request) {
  const {
    originalCode,
    updateSnippet,
    models = ['morph/morph-v3-large', 'anthropic/claude-3-5-sonnet-20241022', 'openai/gpt-4o'],
  } = await req.json();

  const results = await Promise.all(
    models.map(async (modelId: string) => {
      const result = streamText({
        model: modelId,
        messages: [
          {
            role: 'user',
            content: `<code>${originalCode}</code>\n<update>${updateSnippet}</update>`
          }
        ],
      });

      return {
        model: modelId,
        // result.text resolves once the stream has finished
        result: await result.text,
      };
    })
  );

  return Response.json({ results });
}
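
On the client, the comparison endpoint is a plain fetch; a short sketch matching the response shape above:

// Sketch: consume the compare endpoint and log each model's output.
async function compareModels(originalCode: string, updateSnippet: string) {
  const res = await fetch('/api/morph/compare', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ originalCode, updateSnippet }),
  });

  const { results } = (await res.json()) as {
    results: { model: string; result: string }[];
  };

  for (const { model, result } of results) {
    console.log(`--- ${model} ---\n${result}`);
  }
}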

Real-time Collaborative Editing

Integrate with WebSockets for multi-user editing across different models:
components/CollaborativeEditor.tsx
'use client';

// React hooks moved to @ai-sdk/react in AI SDK 5
import { useCompletion } from '@ai-sdk/react';
import { useEffect, useState } from 'react';
import { io, type Socket } from 'socket.io-client';

export function CollaborativeEditor({ roomId }: { roomId: string }) {
  const [socket, setSocket] = useState<Socket | null>(null);
  const [collaborators, setCollaborators] = useState<string[]>([]);
  const [sharedCode, setSharedCode] = useState('');
  const [selectedModel, setSelectedModel] = useState('morph/morph-v3-large');

  const { completion, isLoading, complete } = useCompletion({
    api: '/api/morph',
    onFinish: (prompt, completion) => {
      // Broadcast completed edit to all collaborators
      socket?.emit('code-updated', {
        roomId,
        code: completion,
        model: selectedModel,
        timestamp: Date.now(),
      });
    },
  });

  useEffect(() => {
    const newSocket = io(process.env.NEXT_PUBLIC_SOCKET_URL!);
    setSocket(newSocket);

    newSocket.emit('join-room', roomId);

    newSocket.on('collaborator-joined', (users: string[]) => {
      setCollaborators(users);
    });

    newSocket.on('code-updated', ({ code, model, timestamp }) => {
      setSharedCode(code);
      console.log(`Code updated using ${model} at ${timestamp}`);
    });

    newSocket.on('edit-started', ({ userId, instruction, model }) => {
      console.log(`${userId} started editing with ${model}: ${instruction}`);
    });

    return () => newSocket.close();
  }, [roomId]);

  const applySharedEdit = async (instruction: string) => {
    // Notify collaborators that an edit is starting
    socket?.emit('edit-started', {
      roomId,
      instruction,
      model: selectedModel,
      timestamp: Date.now(),
    });

    await complete('', {
      body: {
        originalCode: sharedCode,
        updateSnippet: instruction,
        model: selectedModel,
      },
    });
  };

  return (
    <div className="h-screen flex flex-col p-4">
      {/* Collaboration Header */}
      <div className="flex justify-between items-center mb-4 p-3 bg-gray-50 rounded-lg">
        <div>
          <h2 className="font-semibold">Room: {roomId}</h2>
          <p className="text-sm text-gray-600">
            {collaborators.length} collaborator(s) online
          </p>
        </div>
        
        <div className="flex items-center gap-4">
          <select
            value={selectedModel}
            onChange={(e) => setSelectedModel(e.target.value)}
            className="px-3 py-1 border rounded text-sm"
          >
            <option value="morph/morph-v3-large">Morph v3 Large</option>
            <option value="anthropic/claude-3-5-sonnet-20241022">Claude 3.5 Sonnet</option>
            <option value="openai/gpt-4o">GPT-4o</option>
            <option value="xai/grok-3">Grok 3</option>
          </select>
          
          {isLoading && (
            <div className="text-blue-600 flex items-center">
              <div className="animate-spin h-4 w-4 border-2 border-blue-600 border-t-transparent rounded-full mr-2"></div>
              Applying edit...
            </div>
          )}
        </div>
      </div>

      {/* Shared Code Editor */}
      <div className="flex-1 grid grid-cols-2 gap-4">
        <div>
          <h3 className="text-lg font-semibold mb-2">Shared Code</h3>
          <pre className="h-full p-3 border rounded-lg font-mono text-sm bg-white overflow-auto">
            {isLoading ? completion : sharedCode || 'Start coding together...'}
          </pre>
        </div>

        <div>
          <h3 className="text-lg font-semibold mb-2">Apply Edit</h3>
          <EditInstructionInput onSubmit={applySharedEdit} disabled={isLoading} />
        </div>
      </div>
    </div>
  );
}
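
The component assumes a Socket.IO server relaying these events. A minimal sketch of one (the file path and port are illustrative, and the room bookkeeping is simplified; not production-ready):

server/socket.ts
import { Server } from 'socket.io';

// Minimal relay for the events the editor emits; run as a standalone Node process.
const io = new Server(3001, { cors: { origin: '*' } });

io.on('connection', (socket) => {
  socket.on('join-room', (roomId: string) => {
    socket.join(roomId);
    const size = io.sockets.adapter.rooms.get(roomId)?.size ?? 1;
    // Simplified: broadcast the socket count as a placeholder collaborator list
    io.to(roomId).emit(
      'collaborator-joined',
      Array.from({ length: size }, (_, i) => `user-${i + 1}`),
    );
  });

  socket.on('edit-started', (payload: { roomId: string; instruction: string; model: string }) => {
    // Attach the sender's socket id so listeners know who started the edit
    socket.to(payload.roomId).emit('edit-started', { ...payload, userId: socket.id });
  });

  socket.on('code-updated', (payload: { roomId: string; code: string; model: string; timestamp: number }) => {
    socket.to(payload.roomId).emit('code-updated', payload);
  });
});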

Performance Tips

1. Model-Specific Optimization

// Choose the best model for each task
const modelForTask = (task: string) => {
  switch (task) {
    case 'code-edit':
      return 'morph/morph-v3-large'; // Best for code edits
    case 'documentation':
      return 'anthropic/claude-3-5-sonnet-20241022'; // Best for writing
    case 'quick-fix':
      return 'openai/gpt-4o-mini'; // Fastest for simple tasks
    default:
      return 'morph/morph-v3-large';
  }
};
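
You could wire this into the edit route by letting clients send a task hint instead of a raw model ID; a sketch, where the task field is a hypothetical addition to the request body:

import { streamText } from 'ai';

// Hypothetical variant of the edit route that picks the model per task.
// modelForTask is the helper defined above (import or inline it here).
export async function POST(req: Request) {
  const { originalCode, updateSnippet, task = 'code-edit' } = await req.json();

  const result = streamText({
    model: modelForTask(task),
    messages: [
      { role: 'user', content: `<code>${originalCode}</code>\n<update>${updateSnippet}</update>` }
    ],
  });

  return result.toUIMessageStreamResponse();
}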

2. Smart Model Fallbacks

import { generateText } from 'ai';

// streamText surfaces model errors during stream consumption rather than at
// call time, so generateText is used here: it rejects immediately on failure,
// which makes sequential fallback reliable.
const applyEditWithFallback = async (code: string, instruction: string) => {
  const models = ['morph/morph-v3-large', 'anthropic/claude-3-5-sonnet-20241022', 'openai/gpt-4o'];

  for (const model of models) {
    try {
      const { text } = await generateText({
        model,
        messages: [{ role: 'user', content: `<code>${code}</code>\n<update>${instruction}</update>` }],
      });
      return text;
    } catch (error) {
      console.warn(`Model ${model} failed, trying next...`);
    }
  }

  throw new Error('All models failed');
};

3. Caching with Model Awareness

import { kv } from '@vercel/kv';
import { generateText } from 'ai';

const getCachedEditOrGenerate = async (code: string, instruction: string, model: string) => {
  // Key includes the model so different models don't share cache entries
  const key = `edit:${model}:${hashCode(code)}:${hashCode(instruction)}`;

  const cached = await kv.get<string>(key);
  if (cached) return cached;

  // The full result is cached anyway, so the non-streaming call is simplest here
  const { text } = await generateText({
    model,
    messages: [{ role: 'user', content: `<code>${code}</code>\n<update>${instruction}</update>` }],
  });

  await kv.set(key, text, { ex: 3600 }); // Cache for 1 hour
  return text;
};
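
The snippet assumes a hashCode helper for building cache keys; a minimal sketch (djb2; any stable string hash works):

// Stable 32-bit string hash (djb2), returned in base 36 for compact keys.
function hashCode(input: string): string {
  let hash = 5381;
  for (let i = 0; i < input.length; i++) {
    hash = ((hash << 5) + hash + input.charCodeAt(i)) | 0;
  }
  return (hash >>> 0).toString(36);
}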

Error Handling & Recovery

const { completion, error, isLoading, complete } = useCompletion({
  api: '/api/morph',
  onError: (error) => {
    // Graceful error handling (toast is your app's notification helper)
    toast.error(`Edit failed: ${error.message}`);

    // Log for debugging
    console.error('AI Gateway Error:', error);

    // Automatically retry with a different model on rate limits
    // (retryWithDifferentModel is app-specific; see the sketch below)
    if (error.message.includes('rate_limit')) {
      retryWithDifferentModel();
    }
  },
});
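
retryWithDifferentModel is application-specific; one possible sketch, built around the applyEdit function returned by the useMorphStream hook above (this assumes applyEdit rejects on failure, which you may need to adapt to your error handling):

const FALLBACK_MODELS = ['anthropic/claude-3-5-sonnet-20241022', 'openai/gpt-4o'];

// Hypothetical retry helper: re-runs the edit against a fallback model list.
async function retryWithDifferentModel(
  applyEdit: (code: string, edit: string, model?: string) => Promise<void>,
  code: string,
  edit: string,
) {
  for (const model of FALLBACK_MODELS) {
    try {
      await applyEdit(code, edit, model);
      return; // success
    } catch {
      // fall through to the next model
    }
  }
  throw new Error('All fallback models failed');
}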

Next Steps

- Cost Management: Use faster/cheaper models for simple tasks and premium models for complex ones
- Monitoring: Track which models perform best for your specific use cases
- Custom Integrations: Build domain-specific editing interfaces with model specialization

The AI Gateway + Morph combination gives you the flexibility to build sophisticated, model-agnostic code editing experiences that adapt to any provider or model without code changes.

Note: The AI Gateway is currently in alpha. For production applications, consider pinning to specific versions and implementing robust error handling and fallback strategies.