The AI package extends Nuxt Crouton with AI-powered chat and text completion functionality. Built on the Vercel AI SDK, it provides streaming chat interfaces, multi-provider support (OpenAI and Anthropic), and ready-to-use Vue components.
@fyit/crouton-ai

Composables (4):

- useChat() - Streaming chat with conversation history
- useCompletion() - Single-turn text completion
- useAIProvider() - Provider and model configuration
- useTranslationSuggestion() - AI-powered translation suggestions

Components (5):

- AIChatbox - Complete chat interface with messages and input
- AIMessage - Individual message bubble component
- AIInput - Message input with send button
- AIPageGenerator - AI-powered page content generation
- AITranslateButton - One-click AI translation trigger

Server Utilities:

- createAIProvider() - Server-side provider factory

Server API Endpoints (in /api/ai/):

- translate.post.ts - Translate text content
- translate-blocks.post.ts - Translate block-based content
- generate-email-template.post.ts - Generate email template content
- generate-page.post.ts - Generate page content

Note that the package does not ship a chat.post.ts endpoint. The useChat composable defaults to /api/ai/chat, but you must create this endpoint in your own app (see Server Usage for examples).

Integration:

- Automatic provider detection from model IDs (gpt-* → OpenAI, claude-* → Anthropic)

Before installing, ensure you have:

- @fyit/crouton installed

Install the package:

pnpm add @fyit/crouton-ai
Add the AI layer to your nuxt.config.ts:
export default defineNuxtConfig({
extends: [
'@fyit/crouton',
'@fyit/crouton-ai'
],
runtimeConfig: {
// Server-side (private)
openaiApiKey: '', // Set via NUXT_OPENAI_API_KEY
anthropicApiKey: '', // Set via NUXT_ANTHROPIC_API_KEY
// Client-side (public)
public: {
croutonAI: {
defaultProvider: 'openai',
defaultModel: 'gpt-4o'
}
}
}
})
Create or update your .env file:
# For OpenAI models (gpt-4o, gpt-4-turbo, o1, etc.)
NUXT_OPENAI_API_KEY=sk-...
# For Anthropic models (claude-sonnet-4, claude-opus-4, etc.)
NUXT_ANTHROPIC_API_KEY=sk-ant-...
Create a simple chat page:
<template>
<div class="max-w-2xl mx-auto p-4">
<h1 class="text-2xl font-bold mb-4">AI Chat</h1>
<div class="h-[600px]">
<AIChatbox
system-prompt="You are a helpful assistant."
placeholder="Ask me anything..."
/>
</div>
</div>
</template>
For more control, use the useChat composable directly:
<template>
<div class="space-y-4">
<!-- Messages -->
<div v-for="message in messages" :key="message.id" class="p-3 rounded-lg"
:class="message.role === 'user' ? 'bg-blue-100 ml-12' : 'bg-gray-100 mr-12'">
<p class="text-sm font-medium">{{ message.role }}</p>
<p>{{ message.content }}</p>
</div>
<!-- Input -->
<form @submit.prevent="handleSubmit" class="flex gap-2">
<input
v-model="input"
placeholder="Type a message..."
class="flex-1 px-4 py-2 border rounded-lg"
:disabled="isLoading"
/>
<button
type="submit"
:disabled="isLoading || !input.trim()"
class="px-4 py-2 bg-blue-500 text-white rounded-lg disabled:opacity-50"
>
{{ isLoading ? 'Sending...' : 'Send' }}
</button>
</form>
</div>
</template>
<script setup lang="ts">
const { messages, input, handleSubmit, isLoading } = useChat({
model: 'gpt-4o',
systemPrompt: 'You are a helpful assistant.',
onFinish: (message) => {
console.log('Response complete:', message.content)
},
onError: (error) => {
console.error('Chat error:', error)
}
})
</script>
The primary composable for chat functionality. Wraps the Vercel AI SDK's useChat with Crouton-specific defaults.
interface AIChatOptions {
/** API endpoint for chat (default: '/api/ai/chat') */
api?: string
/** Unique identifier for the chat session */
id?: string
/** Provider to use (e.g., 'openai', 'anthropic') */
provider?: string
/** Model to use (e.g., 'gpt-4o', 'claude-sonnet-4-20250514') */
model?: string
/** System prompt to set context */
systemPrompt?: string
/** Initial messages to populate the chat */
initialMessages?: AIMessage[]
/** Initial input value */
initialInput?: string
/** Additional body parameters to send with each request */
body?: Record<string, unknown>
/** Additional headers to send with each request */
headers?: Record<string, string> | Headers
/** Credentials mode for fetch requests */
credentials?: 'omit' | 'same-origin' | 'include'
/** Callback when a message is complete */
onFinish?: (message: AIMessage) => void
/** Callback when an error occurs */
onError?: (error: Error) => void
/** Callback when a response is received */
onResponse?: (response: Response) => void | Promise<void>
}
{
// Core state
messages: ComputedRef<AIMessage[]> // Conversation history
input: Ref<string> // Current input value
isLoading: ComputedRef<boolean> // Whether request is in progress
error: Ref<Error | undefined> // Current error state
status: Ref<'idle' | 'streaming' | 'submitted'>
// Actions
handleSubmit: () => void // Submit current input
stop: () => void // Stop streaming response
reload: () => void // Regenerate last response
append: (message: AIMessage) => void // Add message to history
setMessages: (messages: AIMessage[]) => void
// Crouton helpers
clearMessages: () => void // Clear all messages
exportMessages: () => AIMessage[] // Export for persistence
importMessages: (messages: AIMessage[]) => void // Restore messages
}
With System Prompt:
const { messages, input, handleSubmit, isLoading } = useChat({
model: 'gpt-4o',
systemPrompt: `You are an expert customer support agent for Acme Corp.
Be helpful, friendly, and concise.
Always greet customers by name when provided.`
})
With Initial Messages:
const { messages, handleSubmit } = useChat({
initialMessages: [
{ id: '1', role: 'assistant', content: 'Hello! How can I help you today?' }
]
})
With Callbacks:
const { messages, handleSubmit } = useChat({
onFinish: (message) => {
// Save to database, track analytics, etc.
saveConversation(messages.value)
},
onError: (error) => {
toast.add({
title: 'Error',
description: error.message,
color: 'red'
})
}
})
For single-turn text completion without conversation history.
interface AICompletionOptions {
/** API endpoint for completion (default: '/api/ai/completion') */
api?: string
/** Provider to use */
provider?: string
/** Model to use */
model?: string
/** Additional body parameters */
body?: Record<string, unknown>
/** Additional headers */
headers?: Record<string, string> | Headers
/** Credentials mode */
credentials?: 'omit' | 'same-origin' | 'include'
/** Callback when completion is finished */
onFinish?: (completion: string) => void
/** Callback when an error occurs */
onError?: (error: Error) => void
}
{
completion: Ref<string> // Current completion text
complete: (prompt: string) => Promise<void> // Trigger completion
input: Ref<string> // Input value
isLoading: Ref<boolean> // Loading state
error: Ref<Error | undefined> // Error state
stop: () => void // Stop generation
setCompletion: (text: string) => void
// Crouton helpers
clearCompletion: () => void // Clear completion text
}
<template>
<div class="space-y-4">
<UTextarea v-model="textToSummarize" placeholder="Paste text to summarize..." />
<UButton @click="summarize" :loading="isLoading">
Summarize
</UButton>
<div v-if="completion" class="p-4 bg-gray-100 rounded-lg">
<h3 class="font-medium mb-2">Summary:</h3>
<p>{{ completion }}</p>
</div>
</div>
</template>
<script setup lang="ts">
const textToSummarize = ref('')
const { completion, complete, isLoading } = useCompletion({
model: 'gpt-4o-mini' // Use faster model for summaries
})
const summarize = async () => {
await complete(`Please summarize the following text in 2-3 sentences:\n\n${textToSummarize.value}`)
}
</script>
Access provider configuration and model information.
{
/** Default provider from config */
defaultProvider: ComputedRef<string>
/** Default model from config */
defaultModel: ComputedRef<string>
/** List of all available providers */
providers: AIProvider[]
/** Model information by ID */
models: Record<string, AIModel>
// Helper functions
getProvider: (providerId: string) => AIProvider | undefined
getModel: (modelId: string) => AIModel | undefined
getModelsForProvider: (providerId: string) => AIModel[]
isModelFromProvider: (modelId: string, providerId: string) => boolean
detectProviderFromModel: (modelId: string) => string | undefined
}
<template>
<div class="space-y-4">
<UFormField label="Provider">
<USelectMenu v-model="selectedProvider" :items="providers" value-key="id" label-key="name" />
</UFormField>
<UFormField label="Model">
<USelectMenu v-model="selectedModel" :items="availableModels" value-key="id" label-key="name" />
</UFormField>
</div>
</template>
<script setup lang="ts">
const { providers, getModelsForProvider, defaultProvider, defaultModel } = useAIProvider()
const selectedProvider = ref(defaultProvider.value)
const selectedModel = ref(defaultModel.value)
const availableModels = computed(() =>
getModelsForProvider(selectedProvider.value)
)
</script>
All components are auto-imported with the AI prefix.
Complete chat interface with messages area, error handling, and input.
interface AIChatboxProps {
/** API endpoint for chat (default: '/api/ai/chat') */
api?: string
/** System prompt to set context */
systemPrompt?: string
/** Placeholder text for input */
placeholder?: string
/** Message shown when there are no messages */
emptyMessage?: string
/** Provider to use */
provider?: string
/** Model to use */
model?: string
/** Initial messages */
initialMessages?: AIMessage[]
}
@finish: (message: AIMessage) => void // Emitted when response completes
@error: (error: Error) => void // Emitted on errors
The component exposes its internal state for programmatic control:
<template>
<div>
<AIChatbox ref="chatbox" />
<UButton @click="clearChat">Clear Chat</UButton>
</div>
</template>
<script setup lang="ts">
const chatbox = ref()
const clearChat = () => {
chatbox.value?.clearMessages()
}
</script>
Basic:
<template>
<div class="h-[600px]">
<AIChatbox system-prompt="You are a helpful coding assistant." />
</div>
</template>
With Custom Model:
<template>
<AIChatbox
model="claude-sonnet-4-20250514"
system-prompt="You are Claude, a helpful AI assistant."
placeholder="Chat with Claude..."
/>
</template>
Individual message bubble component.
interface AIMessageProps {
/** The message to display */
message: AIMessage
/** Whether this message is currently streaming */
isStreaming?: boolean
}
<template>
<div class="space-y-4">
<AIMessage
v-for="message in messages"
:key="message.id"
:message="message"
:is-streaming="isLoading && message === messages[messages.length - 1]"
/>
</div>
</template>
Message input with send button.
interface AIInputProps {
/** Current input value (v-model) */
modelValue?: string
/** Whether the input is in loading state */
loading?: boolean
/** Placeholder text */
placeholder?: string
/** Whether the input is disabled */
disabled?: boolean
}
@update:modelValue: (value: string) => void
@submit: () => void // Emitted when user submits (Enter or click)
<template>
<AIInput
v-model="input"
:loading="isLoading"
placeholder="Type your message..."
@submit="handleSubmit"
/>
</template>
The package provides a createAIProvider() factory for server-side AI operations.
// server/api/ai/chat.post.ts
// createAIProvider is auto-imported when extending the layer
import { streamText } from 'ai'
export default defineEventHandler(async (event) => {
const { messages, model } = await readBody(event)
const ai = createAIProvider(event)
const result = await streamText({
model: ai.model(model || 'gpt-4o'),
messages
})
return result.toDataStreamResponse()
})
// server/api/ai/chat.post.ts
// createAIProvider is auto-imported when extending the layer
import { streamText } from 'ai'
export default defineEventHandler(async (event) => {
const { messages, model, systemPrompt } = await readBody(event)
const ai = createAIProvider(event)
// Prepend system message if provided
const allMessages = systemPrompt
? [{ role: 'system', content: systemPrompt }, ...messages]
: messages
const result = await streamText({
model: ai.model(model || 'gpt-4o'),
messages: allMessages
})
return result.toDataStreamResponse()
})
// server/api/ai/generate.post.ts
// createAIProvider is auto-imported when extending the layer
import { generateText } from 'ai'
export default defineEventHandler(async (event) => {
const { prompt, model } = await readBody(event)
const ai = createAIProvider(event)
const result = await generateText({
model: ai.model(model || 'gpt-4o'),
prompt
})
return { text: result.text }
})
The ai.model() function automatically detects the provider from the model ID:
const ai = createAIProvider(event)
// OpenAI models
ai.model('gpt-4o') // → Uses OpenAI
ai.model('gpt-4-turbo') // → Uses OpenAI
ai.model('o1') // → Uses OpenAI
ai.model('o1-mini') // → Uses OpenAI
ai.model('o3-mini') // → Uses OpenAI
// Anthropic models
ai.model('claude-sonnet-4-20250514') // → Uses Anthropic
ai.model('claude-opus-4-20250514') // → Uses Anthropic
ai.model('claude-3-5-sonnet-20241022') // → Uses Anthropic
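The detection itself can be pictured as a simple prefix lookup. The sketch below is an illustrative reimplementation (not the package's actual source) of how a model ID might map to a provider:

```typescript
// Illustrative prefix-based provider detection.
// Prefixes and the helper name are assumptions, not the package's real code.
const PROVIDER_PREFIXES: Record<string, string> = {
  'gpt-': 'openai',
  'o1': 'openai',
  'o3': 'openai',
  'claude-': 'anthropic'
}

function detectProvider(modelId: string): string | undefined {
  // First matching prefix wins; unknown model IDs yield undefined.
  for (const [prefix, provider] of Object.entries(PROVIDER_PREFIXES)) {
    if (modelId.startsWith(prefix)) return provider
  }
  return undefined
}
```

A lookup like this is why new model versions (e.g. a future gpt-* release) usually work without any configuration change.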
For advanced use cases, access providers directly:
const ai = createAIProvider(event)
// Get OpenAI provider
const openai = ai.openai()
const gpt4 = openai('gpt-4o')
// Get Anthropic provider
const anthropic = ai.anthropic()
const claude = anthropic('claude-sonnet-4-20250514')
Supported Models:
| Model | Description |
|---|---|
| gpt-4o | Most capable, great for complex tasks |
| gpt-4o-mini | Fast and cost-effective for simpler tasks |
| gpt-4-turbo | High capability with larger context window |
| o1 | Advanced reasoning model for complex problems |
| o1-mini | Fast reasoning model |
Configuration:
NUXT_OPENAI_API_KEY=sk-...
Supported Models:
| Model | Description |
|---|---|
| claude-sonnet-4-20250514 | Balanced performance and speed |
| claude-opus-4-20250514 | Most capable Anthropic model |
| claude-3-5-sonnet-20241022 | Previous generation, reliable performance |
Configuration:
NUXT_ANTHROPIC_API_KEY=sk-ant-...
The package includes a JSON schema for generating a chat conversations collection with the Crouton generator.
pnpm crouton generate ai chatConversations \
--fields-file=node_modules/@fyit/crouton-ai/schemas/chat-conversations.json \
--dialect=sqlite
This creates a collection with these fields:
| Field | Type | Description |
|---|---|---|
| title | string | Conversation title |
| messages | json | Array of chat messages |
| provider | string | AI provider used |
| model | string | Model identifier |
| systemPrompt | text | System prompt used |
| metadata | json | Additional metadata |
| messageCount | number | Cached message count |
| lastMessageAt | date | Last message timestamp |
<script setup lang="ts">
const route = useRoute()
const teamId = route.params.team as string
const { messages, input, handleSubmit, exportMessages } = useChat({
model: 'gpt-4o',
onFinish: async (message) => {
// Auto-save after each response
await saveConversation()
}
})
const conversationId = ref<string | null>(null)
const saveConversation = async () => {
const payload = {
title: messages.value[0]?.content.slice(0, 50) || 'New Conversation',
messages: exportMessages(),
provider: 'openai',
model: 'gpt-4o',
messageCount: messages.value.length,
lastMessageAt: new Date()
}
if (conversationId.value) {
await $fetch(`/api/teams/${teamId}/chatConversations/${conversationId.value}`, {
method: 'PUT',
body: payload
})
} else {
const result = await $fetch(`/api/teams/${teamId}/chatConversations`, {
method: 'POST',
body: payload
})
conversationId.value = result.id
}
}
</script>
<script setup lang="ts">
const route = useRoute()
const teamId = route.params.team as string
const conversationId = route.params.id as string
const { messages, importMessages } = useChat()
// Load existing conversation
const { data: conversation } = await useFetch(
`/api/teams/${teamId}/chatConversations/${conversationId}`
)
// Restore messages
if (conversation.value?.messages) {
importMessages(conversation.value.messages)
}
</script>
All types are exported for use in your application:
import type {
AIMessage,
AIProvider,
AIModel,
AIChatOptions,
AICompletionOptions,
AIChatboxProps,
AIMessageProps,
AIInputProps
} from '@fyit/crouton-ai/types'
interface AIMessage {
/** Unique identifier for the message */
id: string
/** The role of the message sender */
role: 'user' | 'assistant' | 'system'
/** The content of the message */
content: string
/** When the message was created */
createdAt?: Date
}
// nuxt.config.ts
export default defineNuxtConfig({
runtimeConfig: {
// Server-only (not exposed to client)
openaiApiKey: '', // Set via NUXT_OPENAI_API_KEY
anthropicApiKey: '', // Set via NUXT_ANTHROPIC_API_KEY
// Public (safe for client)
public: {
croutonAI: {
defaultProvider: 'openai',
defaultModel: 'gpt-4o'
}
}
}
})
Implement rate limiting on your API endpoints:
// server/api/ai/chat.post.ts
// createAIProvider is auto-imported when extending the layer
import { streamText } from 'ai'
export default defineEventHandler(async (event) => {
// Get user/team from session
const user = await requireAuth(event)
// Check rate limit (implement your own logic)
const { allowed, remaining } = await checkRateLimit(user.id, 'ai-chat')
if (!allowed) {
throw createError({
statusCode: 429,
statusMessage: 'Rate limit exceeded. Please try again later.'
})
}
const { messages, model } = await readBody(event)
const ai = createAIProvider(event)
const result = await streamText({
model: ai.model(model || 'gpt-4o'),
messages
})
return result.toDataStreamResponse()
})
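checkRateLimit above is a placeholder you implement yourself. One possible approach is an in-memory fixed-window limiter; this sketch (the names WINDOW_MS, MAX_REQUESTS, and checkRateLimit are illustrative) works for a single server process only, and a shared store such as Redis or your database is needed once you run multiple instances:

```typescript
// Hypothetical in-memory fixed-window rate limiter.
// Single-process only; back it with a shared store in production.
const WINDOW_MS = 60_000 // 1-minute window
const MAX_REQUESTS = 20  // requests allowed per window

type Window = { start: number; count: number }
const windows = new Map<string, Window>()

async function checkRateLimit(
  userId: string,
  action: string,
  now: number = Date.now()
): Promise<{ allowed: boolean; remaining: number }> {
  const key = `${userId}:${action}`
  const win = windows.get(key)

  // No window yet, or the current window has expired: start fresh.
  if (!win || now - win.start >= WINDOW_MS) {
    windows.set(key, { start: now, count: 1 })
    return { allowed: true, remaining: MAX_REQUESTS - 1 }
  }

  if (win.count >= MAX_REQUESTS) {
    return { allowed: false, remaining: 0 }
  }

  win.count++
  return { allowed: true, remaining: MAX_REQUESTS - win.count }
}
```

The function is async so that the call site shown above keeps working unchanged when you later swap the Map for a networked store.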
Always handle errors gracefully:
<script setup lang="ts">
const { messages, handleSubmit, error } = useChat({
onError: (err) => {
// Log for debugging
console.error('Chat error:', err)
// Show user-friendly message
toast.add({
title: 'Something went wrong',
description: 'Please try again or contact support if the problem persists.',
color: 'red'
})
}
})
</script>
<template>
<div>
<!-- Show error state -->
<UAlert v-if="error" color="red" :title="error.message" />
<!-- Chat interface -->
<AIChatbox />
</div>
</template>
Choose the right model for the task:
// For simple tasks, use mini models
const { complete } = useCompletion({
model: 'gpt-4o-mini' // Faster, cheaper
})
// For complex reasoning, use full models
const { messages, handleSubmit } = useChat({
model: 'gpt-4o' // More capable
})
// For advanced reasoning, use o1 models
const { messages, handleSubmit } = useChat({
model: 'o1' // Best reasoning, highest cost
})
Check environment variable names:
# Correct
NUXT_OPENAI_API_KEY=sk-...
NUXT_ANTHROPIC_API_KEY=sk-ant-...
# Wrong (missing NUXT_ prefix)
OPENAI_API_KEY=sk-...
Verify key in runtime config:
// server/api/debug.get.ts (development only!)
export default defineEventHandler((event) => {
const config = useRuntimeConfig()
return {
hasOpenAI: !!config.openaiApiKey,
hasAnthropic: !!config.anthropicApiKey
}
})
Ensure your API endpoint returns a data stream response:
// Correct
return result.toDataStreamResponse()
// Wrong - returns object, not stream
return { text: result.text }
Check that your messages have the correct structure:
// Correct
const message: AIMessage = {
id: 'unique-id',
role: 'user', // or 'assistant' or 'system'
content: 'Hello!'
}
// Wrong - missing required fields
const message = {
text: 'Hello!'
}
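When messages come from an untrusted source (user input, a conversation restored from the database), a small runtime type guard (a hypothetical helper, not part of the package) can reject malformed messages before they reach the API:

```typescript
// Local copy of the message shape, matching the AIMessage type above.
interface AIMessage {
  id: string
  role: 'user' | 'assistant' | 'system'
  content: string
  createdAt?: Date
}

// Runtime guard: checks the required fields the type system can't
// verify for values of unknown origin.
function isAIMessage(value: unknown): value is AIMessage {
  if (typeof value !== 'object' || value === null) return false
  const m = value as Record<string, unknown>
  return (
    typeof m.id === 'string' &&
    typeof m.content === 'string' &&
    (m.role === 'user' || m.role === 'assistant' || m.role === 'system')
  )
}
```

For example, filtering restored data with messages.filter(isAIMessage) before calling importMessages() drops entries that would otherwise fail at request time.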
Run typecheck after adding the package:
npx nuxt typecheck
Common issues:
v0.1.0 (Current)