Features

AI Integration

Chat, completion, and multi-provider AI support for Nuxt Crouton

The AI package extends Nuxt Crouton with AI-powered chat and text completion functionality. Built on the Vercel AI SDK, it provides streaming chat interfaces, multi-provider support (OpenAI and Anthropic), and ready-to-use Vue components.

Overview

Package Information

  • Package: @friendlyinternet/nuxt-crouton-ai
  • Version: 0.1.0
  • Type: Nuxt Layer / Addon Package
  • Repository: nuxt-crouton monorepo

What's Included

Composables (3):

  • useChat() - Streaming chat with conversation history
  • useCompletion() - Single-turn text completion
  • useAIProvider() - Provider and model configuration

Components (3):

  • AIChatbox - Complete chat interface with messages and input
  • AIMessage - Individual message bubble component
  • AIInput - Message input with send button

Server Utilities:

  • createAIProvider() - Server-side provider factory
  • Automatic provider detection from the model ID
  • Streaming and non-streaming response support

Integration:

  • Chat conversations schema for persistence
  • Multi-provider support (OpenAI, Anthropic)
  • Team context integration (when using crouton-auth)

Key Features

  • Real-time Streaming - Messages stream token-by-token for responsive UX
  • Multi-Provider - Switch between OpenAI and Anthropic seamlessly
  • Auto-Detection - Provider auto-detected from model ID (gpt-* → OpenAI, claude-* → Anthropic)
  • Ready-to-Use Components - Drop-in chat interface components
  • Conversation Persistence - Schema for storing chat history
  • Team Integration - Automatic team scoping when using crouton-auth
  • Type-Safe - Full TypeScript support with exported types

Installation

Prerequisites

Before installing, ensure you have:

  • Nuxt 4.0+
  • @friendlyinternet/nuxt-crouton installed
  • An API key for at least one provider (OpenAI or Anthropic)

Install Package

pnpm add @friendlyinternet/nuxt-crouton-ai

Configure Nuxt

Add the AI layer to your nuxt.config.ts:

export default defineNuxtConfig({
  extends: [
    '@friendlyinternet/nuxt-crouton',
    '@friendlyinternet/nuxt-crouton-ai'
  ],
  runtimeConfig: {
    // Server-side (private)
    openaiApiKey: '',      // Set via NUXT_OPENAI_API_KEY
    anthropicApiKey: '',   // Set via NUXT_ANTHROPIC_API_KEY

    // Client-side (public)
    public: {
      croutonAI: {
        defaultProvider: 'openai',
        defaultModel: 'gpt-4o'
      }
    }
  }
})

Environment Variables

Create or update your .env file:

# For OpenAI models (gpt-4o, gpt-4-turbo, o1, etc.)
NUXT_OPENAI_API_KEY=sk-...

# For Anthropic models (claude-sonnet-4, claude-opus-4, etc.)
NUXT_ANTHROPIC_API_KEY=sk-ant-...

You only need API keys for the providers you plan to use. If you only use OpenAI models, you don't need an Anthropic key.

Quick Start

Basic Chat

Create a simple chat page:

<template>
  <div class="max-w-2xl mx-auto p-4">
    <h1 class="text-2xl font-bold mb-4">AI Chat</h1>
    <div class="h-[600px]">
      <AIChatbox
        system-prompt="You are a helpful assistant."
        placeholder="Ask me anything..."
      />
    </div>
  </div>
</template>

Custom Chat Implementation

For more control, use the useChat composable directly:

<template>
  <div class="space-y-4">
    <!-- Messages -->
    <div v-for="message in messages" :key="message.id" class="p-3 rounded-lg"
         :class="message.role === 'user' ? 'bg-blue-100 ml-12' : 'bg-gray-100 mr-12'">
      <p class="text-sm font-medium">{{ message.role }}</p>
      <p>{{ message.content }}</p>
    </div>

    <!-- Input -->
    <form @submit.prevent="handleSubmit" class="flex gap-2">
      <input
        v-model="input"
        placeholder="Type a message..."
        class="flex-1 px-4 py-2 border rounded-lg"
        :disabled="isLoading"
      />
      <button
        type="submit"
        :disabled="isLoading || !input.trim()"
        class="px-4 py-2 bg-blue-500 text-white rounded-lg disabled:opacity-50"
      >
        {{ isLoading ? 'Sending...' : 'Send' }}
      </button>
    </form>
  </div>
</template>

<script setup lang="ts">
const { messages, input, handleSubmit, isLoading } = useChat({
  model: 'gpt-4o',
  systemPrompt: 'You are a helpful assistant.',
  onFinish: (message) => {
    console.log('Response complete:', message.content)
  },
  onError: (error) => {
    console.error('Chat error:', error)
  }
})
</script>

Composables

useChat()

The primary composable for chat functionality. Wraps the Vercel AI SDK's useChat with Crouton-specific defaults.

Options

interface AIChatOptions {
  /** API endpoint for chat (default: '/api/ai/chat') */
  api?: string
  /** Unique identifier for the chat session */
  id?: string
  /** Provider to use (e.g., 'openai', 'anthropic') */
  provider?: string
  /** Model to use (e.g., 'gpt-4o', 'claude-sonnet-4-20250514') */
  model?: string
  /** System prompt to set context */
  systemPrompt?: string
  /** Initial messages to populate the chat */
  initialMessages?: AIMessage[]
  /** Initial input value */
  initialInput?: string
  /** Additional body parameters to send with each request */
  body?: Record<string, unknown>
  /** Additional headers to send with each request */
  headers?: Record<string, string> | Headers
  /** Credentials mode for fetch requests */
  credentials?: 'omit' | 'same-origin' | 'include'
  /** Callback when a message is complete */
  onFinish?: (message: AIMessage) => void
  /** Callback when an error occurs */
  onError?: (error: Error) => void
  /** Callback when a response is received */
  onResponse?: (response: Response) => void | Promise<void>
}

Returns

{
  // Core state
  messages: ComputedRef<AIMessage[]>     // Conversation history
  input: Ref<string>                     // Current input value
  isLoading: ComputedRef<boolean>        // Whether request is in progress
  error: Ref<Error | undefined>          // Current error state
  status: Ref<'idle' | 'streaming' | 'submitted'>

  // Actions
  handleSubmit: () => void               // Submit current input
  stop: () => void                       // Stop streaming response
  reload: () => void                     // Regenerate last response
  append: (message: AIMessage) => void   // Add message to history
  setMessages: (messages: AIMessage[]) => void

  // Crouton helpers
  clearMessages: () => void              // Clear all messages
  exportMessages: () => AIMessage[]      // Export for persistence
  importMessages: (messages: AIMessage[]) => void  // Restore messages
}

Usage Examples

With System Prompt:

const { messages, input, handleSubmit, isLoading } = useChat({
  model: 'gpt-4o',
  systemPrompt: `You are an expert customer support agent for Acme Corp.
    Be helpful, friendly, and concise.
    Always greet customers by name when provided.`
})

With Initial Messages:

const { messages, handleSubmit } = useChat({
  initialMessages: [
    { id: '1', role: 'assistant', content: 'Hello! How can I help you today?' }
  ]
})

With Callbacks:

const { messages, handleSubmit } = useChat({
  onFinish: (message) => {
    // Save to database, track analytics, etc.
    saveConversation(messages.value)
  },
  onError: (error) => {
    toast.add({
      title: 'Error',
      description: error.message,
      color: 'red'
    })
  }
})
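
With Crouton Helpers:

The stop action and the Crouton helpers cover simple session handling. A minimal sketch using the documented returns:

const { messages, stop, clearMessages, exportMessages, importMessages } = useChat()

// Abort an in-flight streaming response
stop()

// Snapshot the history (e.g. before navigating away), then start fresh
const snapshot = exportMessages()
clearMessages()

// Later, restore the saved conversation
importMessages(snapshot)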

useCompletion()

For single-turn text completion without conversation history.

Options

interface AICompletionOptions {
  /** API endpoint for completion (default: '/api/ai/completion') */
  api?: string
  /** Provider to use */
  provider?: string
  /** Model to use */
  model?: string
  /** Additional body parameters */
  body?: Record<string, unknown>
  /** Additional headers */
  headers?: Record<string, string> | Headers
  /** Credentials mode */
  credentials?: 'omit' | 'same-origin' | 'include'
  /** Callback when completion is finished */
  onFinish?: (completion: string) => void
  /** Callback when an error occurs */
  onError?: (error: Error) => void
}

Returns

{
  completion: Ref<string>       // Current completion text
  complete: (prompt: string) => Promise<void>  // Trigger completion
  input: Ref<string>            // Input value
  isLoading: Ref<boolean>       // Loading state
  error: Ref<Error | undefined> // Error state
  stop: () => void              // Stop generation
  setCompletion: (text: string) => void

  // Crouton helpers
  clearCompletion: () => void   // Clear completion text
}

Usage Example

<template>
  <div class="space-y-4">
    <UTextarea v-model="textToSummarize" placeholder="Paste text to summarize..." />
    <UButton @click="summarize" :loading="isLoading">
      Summarize
    </UButton>
    <div v-if="completion" class="p-4 bg-gray-100 rounded-lg">
      <h3 class="font-medium mb-2">Summary:</h3>
      <p>{{ completion }}</p>
    </div>
  </div>
</template>

<script setup lang="ts">
const textToSummarize = ref('')
const { completion, complete, isLoading } = useCompletion({
  model: 'gpt-4o-mini'  // Use faster model for summaries
})

const summarize = async () => {
  await complete(`Please summarize the following text in 2-3 sentences:\n\n${textToSummarize.value}`)
}
</script>
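
The stop and clearCompletion returns let users cancel a long generation or reset the output. A minimal sketch:

<template>
  <div class="flex gap-2">
    <UButton v-if="isLoading" color="gray" @click="stop">Stop</UButton>
    <UButton v-if="completion" variant="ghost" @click="clearCompletion">Clear</UButton>
  </div>
</template>

<script setup lang="ts">
const { completion, isLoading, stop, clearCompletion } = useCompletion()
</script>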

useAIProvider()

Access provider configuration and model information.

Returns

{
  /** Default provider from config */
  defaultProvider: ComputedRef<string>
  /** Default model from config */
  defaultModel: ComputedRef<string>
  /** List of all available providers */
  providers: AIProvider[]
  /** Model information by ID */
  models: Record<string, AIModel>

  // Helper functions
  getProvider: (providerId: string) => AIProvider | undefined
  getModel: (modelId: string) => AIModel | undefined
  getModelsForProvider: (providerId: string) => AIModel[]
  isModelFromProvider: (modelId: string, providerId: string) => boolean
  detectProviderFromModel: (modelId: string) => string | undefined
}

Usage Example

<template>
  <div class="space-y-4">
    <UFormField label="Provider">
      <USelectMenu v-model="selectedProvider" :items="providers" value-key="id" label-key="name" />
    </UFormField>

    <UFormField label="Model">
      <USelectMenu v-model="selectedModel" :items="availableModels" value-key="id" label-key="name" />
    </UFormField>
  </div>
</template>

<script setup lang="ts">
const { providers, getModelsForProvider, defaultProvider, defaultModel } = useAIProvider()

const selectedProvider = ref(defaultProvider.value)
const selectedModel = ref(defaultModel.value)

const availableModels = computed(() =>
  getModelsForProvider(selectedProvider.value)
)
</script>
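
The lookup helpers are useful for rendering model metadata or validating a selection. A minimal sketch using the documented returns:

const { getModel, detectProviderFromModel, isModelFromProvider } = useAIProvider()

// Look up metadata for a specific model
const gpt4o = getModel('gpt-4o')

// Infer the provider from a model ID
const provider = detectProviderFromModel('claude-sonnet-4-20250514')  // 'anthropic'

// Check whether a model belongs to a given provider
const isOpenAI = isModelFromProvider('gpt-4o', 'openai')  // true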

Components

All components are auto-imported with the AI prefix.

AIChatbox

Complete chat interface with messages area, error handling, and input.

Props

interface AIChatboxProps {
  /** API endpoint for chat (default: '/api/ai/chat') */
  api?: string
  /** System prompt to set context */
  systemPrompt?: string
  /** Placeholder text for input */
  placeholder?: string
  /** Message shown when there are no messages */
  emptyMessage?: string
  /** Provider to use */
  provider?: string
  /** Model to use */
  model?: string
  /** Initial messages */
  initialMessages?: AIMessage[]
}

Events

@finish: (message: AIMessage) => void  // Emitted when response completes
@error: (error: Error) => void         // Emitted on errors
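
For example, to log completed responses and surface errors (the handler names here are illustrative):

<template>
  <AIChatbox
    system-prompt="You are a helpful assistant."
    @finish="onFinish"
    @error="onError"
  />
</template>

<script setup lang="ts">
import type { AIMessage } from '@friendlyinternet/nuxt-crouton-ai'

const onFinish = (message: AIMessage) => {
  console.log('Response complete:', message.content)
}

const onError = (error: Error) => {
  console.error('Chat failed:', error)
}
</script>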

Exposed Methods

The component exposes its internal state for programmatic control:

<template>
  <div>
    <AIChatbox ref="chatbox" />
    <UButton @click="clearChat">Clear Chat</UButton>
  </div>
</template>

<script setup lang="ts">
const chatbox = ref()

const clearChat = () => {
  chatbox.value?.clearMessages()
}
</script>

Usage

Basic:

<template>
  <div class="h-[600px]">
    <AIChatbox system-prompt="You are a helpful coding assistant." />
  </div>
</template>

With Custom Model:

<template>
  <AIChatbox
    model="claude-sonnet-4-20250514"
    system-prompt="You are Claude, a helpful AI assistant."
    placeholder="Chat with Claude..."
  />
</template>

AIMessage

Individual message bubble component.

Props

interface AIMessageProps {
  /** The message to display */
  message: AIMessage
  /** Whether this message is currently streaming */
  isStreaming?: boolean
}

Usage

<template>
  <div class="space-y-4">
    <AIMessage
      v-for="message in messages"
      :key="message.id"
      :message="message"
      :is-streaming="isLoading && message === messages[messages.length - 1]"
    />
  </div>
</template>

AIInput

Message input with send button.

Props

interface AIInputProps {
  /** Current input value (v-model) */
  modelValue?: string
  /** Whether the input is in loading state */
  loading?: boolean
  /** Placeholder text */
  placeholder?: string
  /** Whether the input is disabled */
  disabled?: boolean
}

Events

@update:modelValue: (value: string) => void
@submit: () => void  // Emitted when user submits (Enter or click)

Usage

<template>
  <AIInput
    v-model="input"
    :loading="isLoading"
    placeholder="Type your message..."
    @submit="handleSubmit"
  />
</template>

Server Usage

Creating AI Endpoints

The package provides a createAIProvider() factory for server-side AI operations.

Basic Chat Endpoint

// server/api/ai/chat.post.ts
import { createAIProvider } from '@friendlyinternet/nuxt-crouton-ai/server'
import { streamText } from 'ai'

export default defineEventHandler(async (event) => {
  const { messages, model } = await readBody(event)
  const ai = createAIProvider(event)

  const result = await streamText({
    model: ai.model(model || 'gpt-4o'),
    messages
  })

  return result.toDataStreamResponse()
})

With System Prompt

// server/api/ai/chat.post.ts
import { createAIProvider } from '@friendlyinternet/nuxt-crouton-ai/server'
import { streamText } from 'ai'

export default defineEventHandler(async (event) => {
  const { messages, model, systemPrompt } = await readBody(event)
  const ai = createAIProvider(event)

  // Prepend system message if provided
  const allMessages = systemPrompt
    ? [{ role: 'system', content: systemPrompt }, ...messages]
    : messages

  const result = await streamText({
    model: ai.model(model || 'gpt-4o'),
    messages: allMessages
  })

  return result.toDataStreamResponse()
})

Non-Streaming Response

// server/api/ai/generate.post.ts
import { createAIProvider } from '@friendlyinternet/nuxt-crouton-ai/server'
import { generateText } from 'ai'

export default defineEventHandler(async (event) => {
  const { prompt, model } = await readBody(event)
  const ai = createAIProvider(event)

  const result = await generateText({
    model: ai.model(model || 'gpt-4o'),
    prompt
  })

  return { text: result.text }
})

Provider Auto-Detection

The ai.model() function automatically detects the provider from the model ID:

const ai = createAIProvider(event)

// OpenAI models
ai.model('gpt-4o')           // → Uses OpenAI
ai.model('gpt-4-turbo')      // → Uses OpenAI
ai.model('o1')               // → Uses OpenAI
ai.model('o1-mini')          // → Uses OpenAI
ai.model('o3-mini')          // → Uses OpenAI

// Anthropic models
ai.model('claude-sonnet-4-20250514')    // → Uses Anthropic
ai.model('claude-opus-4-20250514')      // → Uses Anthropic
ai.model('claude-3-5-sonnet-20241022')  // → Uses Anthropic
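
Under the hood this amounts to prefix matching on the model ID. A rough sketch of the rule, based on the documented prefixes (illustrative only, not the package source):

// Hypothetical sketch - mirrors the documented gpt-*/o*/claude-* prefixes
function detectProvider(modelId: string): 'openai' | 'anthropic' | undefined {
  if (modelId.startsWith('gpt-') || /^o\d/.test(modelId)) return 'openai'
  if (modelId.startsWith('claude-')) return 'anthropic'
  return undefined  // unknown model: fall back to the configured default
}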

Accessing Providers Directly

For advanced use cases, access providers directly:

const ai = createAIProvider(event)

// Get OpenAI provider
const openai = ai.openai()
const gpt4 = openai('gpt-4o')

// Get Anthropic provider
const anthropic = ai.anthropic()
const claude = anthropic('claude-sonnet-4-20250514')

Providers

OpenAI

Supported Models:

  • gpt-4o - Most capable, great for complex tasks
  • gpt-4o-mini - Fast and cost-effective for simpler tasks
  • gpt-4-turbo - High capability with larger context window
  • o1 - Advanced reasoning model for complex problems
  • o1-mini - Fast reasoning model


Configuration:

NUXT_OPENAI_API_KEY=sk-...

Anthropic

Supported Models:

  • claude-sonnet-4-20250514 - Balanced performance and speed
  • claude-opus-4-20250514 - Most capable Anthropic model
  • claude-3-5-sonnet-20241022 - Previous generation, reliable performance

Configuration:

NUXT_ANTHROPIC_API_KEY=sk-ant-...

Conversation Persistence

The package includes a JSON schema for generating a chat conversations collection with the Crouton generator.

Generate Chat Conversations Collection

pnpm crouton generate core chatConversations \
  --fields-file=node_modules/@friendlyinternet/nuxt-crouton-ai/schemas/chat-conversations.json \
  --dialect=sqlite

This creates a collection with these fields:

  • title (string) - Conversation title
  • messages (json) - Array of chat messages
  • provider (string) - AI provider used
  • model (string) - Model identifier
  • systemPrompt (text) - System prompt used
  • metadata (json) - Additional metadata
  • messageCount (number) - Cached message count
  • lastMessageAt (date) - Last message timestamp
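
In TypeScript terms, a stored record corresponds roughly to the shape below (illustrative; the generated collection types are the source of truth):

// Illustrative shape - field names and types from the table above
interface ChatConversation {
  id: string                         // assigned on create
  title: string
  messages: AIMessage[]
  provider: string
  model: string
  systemPrompt: string
  metadata: Record<string, unknown>
  messageCount: number
  lastMessageAt: Date
}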

Saving Conversations

<script setup lang="ts">
const route = useRoute()
const teamId = route.params.team as string

const { messages, input, handleSubmit, exportMessages } = useChat({
  model: 'gpt-4o',
  onFinish: async (message) => {
    // Auto-save after each response
    await saveConversation()
  }
})

const conversationId = ref<string | null>(null)

const saveConversation = async () => {
  const payload = {
    title: messages.value[0]?.content.slice(0, 50) || 'New Conversation',
    messages: exportMessages(),
    provider: 'openai',
    model: 'gpt-4o',
    messageCount: messages.value.length,
    lastMessageAt: new Date()
  }

  if (conversationId.value) {
    await $fetch(`/api/teams/${teamId}/chatConversations/${conversationId.value}`, {
      method: 'PUT',
      body: payload
    })
  } else {
    const result = await $fetch(`/api/teams/${teamId}/chatConversations`, {
      method: 'POST',
      body: payload
    })
    conversationId.value = result.id
  }
}
</script>

Loading Conversations

<script setup lang="ts">
const route = useRoute()
const teamId = route.params.team as string
const conversationId = route.params.id as string

const { messages, importMessages } = useChat()

// Load existing conversation
const { data: conversation } = await useFetch(
  `/api/teams/${teamId}/chatConversations/${conversationId}`
)

// Restore messages
if (conversation.value?.messages) {
  importMessages(conversation.value.messages)
}
</script>

Types

All types are exported for use in your application:

import type {
  AIMessage,
  AIProvider,
  AIModel,
  AIChatOptions,
  AICompletionOptions,
  AIChatboxProps,
  AIMessageProps,
  AIInputProps
} from '@friendlyinternet/nuxt-crouton-ai'

AIMessage

interface AIMessage {
  /** Unique identifier for the message */
  id: string
  /** The role of the message sender */
  role: 'user' | 'assistant' | 'system'
  /** The content of the message */
  content: string
  /** When the message was created */
  createdAt?: Date
}

Best Practices

API Key Security

Never expose API keys on the client. API keys should only be used in server-side code.

// nuxt.config.ts
export default defineNuxtConfig({
  runtimeConfig: {
    // Server-only (not exposed to client)
    openaiApiKey: '',      // Set via NUXT_OPENAI_API_KEY
    anthropicApiKey: '',   // Set via NUXT_ANTHROPIC_API_KEY

    // Public (safe for client)
    public: {
      croutonAI: {
        defaultProvider: 'openai',
        defaultModel: 'gpt-4o'
      }
    }
  }
})

Rate Limiting

Implement rate limiting on your API endpoints:

// server/api/ai/chat.post.ts
import { createAIProvider } from '@friendlyinternet/nuxt-crouton-ai/server'
import { streamText } from 'ai'

export default defineEventHandler(async (event) => {
  // Get user/team from session
  const user = await requireAuth(event)

  // Check rate limit (implement your own logic)
  const { allowed, remaining } = await checkRateLimit(user.id, 'ai-chat')
  if (!allowed) {
    throw createError({
      statusCode: 429,
      message: 'Rate limit exceeded. Please try again later.'
    })
  }

  const { messages, model } = await readBody(event)
  const ai = createAIProvider(event)

  const result = await streamText({
    model: ai.model(model || 'gpt-4o'),
    messages
  })

  return result.toDataStreamResponse()
})
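
The checkRateLimit helper above is left to you. One possible sliding-window implementation, kept in memory for brevity (in production, back it with a shared store such as Redis so limits hold across instances):

// server/utils/rate-limit.ts (illustrative sketch)
const requestLog = new Map<string, number[]>()

export async function checkRateLimit(
  userId: string,
  action: string,
  limit = 20,          // max requests per window
  windowMs = 60_000    // one-minute window
): Promise<{ allowed: boolean; remaining: number }> {
  const key = `${action}:${userId}`
  const now = Date.now()

  // Keep only timestamps inside the current window
  const recent = (requestLog.get(key) ?? []).filter(t => now - t < windowMs)

  if (recent.length >= limit) {
    requestLog.set(key, recent)
    return { allowed: false, remaining: 0 }
  }

  recent.push(now)
  requestLog.set(key, recent)
  return { allowed: true, remaining: limit - recent.length }
}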

Error Handling

Always handle errors gracefully:

<script setup lang="ts">
const { messages, handleSubmit, error } = useChat({
  onError: (err) => {
    // Log for debugging
    console.error('Chat error:', err)

    // Show user-friendly message
    toast.add({
      title: 'Something went wrong',
      description: 'Please try again or contact support if the problem persists.',
      color: 'red'
    })
  }
})
</script>

<template>
  <div>
    <!-- Show error state -->
    <UAlert v-if="error" color="red" :title="error.message" />

    <!-- Chat UI driven by the useChat instance above -->
  </div>
</template>

Note that the error ref reflects the useChat instance in your script. An AIChatbox component manages its own internal state, so listen to its @error event instead when using the prebuilt component.

Cost Optimization

Choose the right model for the task:

// For simple tasks, use mini models
const { complete } = useCompletion({
  model: 'gpt-4o-mini'  // Faster, cheaper
})

// For complex reasoning, use full models
const { messages, handleSubmit } = useChat({
  model: 'gpt-4o'  // More capable
})

// For advanced reasoning, use o1 models
const { messages, handleSubmit } = useChat({
  model: 'o1'  // Best reasoning, highest cost
})

Troubleshooting

API Key Not Working

Check environment variable names:

# Correct
NUXT_OPENAI_API_KEY=sk-...
NUXT_ANTHROPIC_API_KEY=sk-ant-...

# Wrong (missing NUXT_ prefix)
OPENAI_API_KEY=sk-...

Verify key in runtime config:

// server/api/debug.get.ts (development only!)
export default defineEventHandler((event) => {
  const config = useRuntimeConfig()
  return {
    hasOpenAI: !!config.openaiApiKey,
    hasAnthropic: !!config.anthropicApiKey
  }
})

Streaming Not Working

Ensure your API endpoint returns a data stream response:

// Correct
return result.toDataStreamResponse()

// Wrong - returns object, not stream
return { text: result.text }

Messages Not Displaying

Check that your messages have the correct structure:

// Correct
const message: AIMessage = {
  id: 'unique-id',
  role: 'user',  // or 'assistant' or 'system'
  content: 'Hello!'
}

// Wrong - missing required fields
const message = {
  text: 'Hello!'
}

TypeScript Errors

Run typecheck after adding the package:

npx nuxt typecheck

Common issues:

  • Missing type imports
  • Using old AI SDK types
  • Incorrect message structure

Version History

v0.1.0 (Current)

  • Initial release
  • useChat, useCompletion, useAIProvider composables
  • AIChatbox, AIMessage, AIInput components
  • Server-side createAIProvider factory
  • OpenAI and Anthropic provider support
  • Chat conversations schema for persistence
  • Team context integration