# useMcpApp() bridge

## useMcpApp()
The single client-side composable, auto-imported into every MCP App SFC. It returns everything the iframe needs to talk to the host:
```ts
const {
  data,        // Ref<T | null> — hydrated from structuredContent, refreshed by callTool
  loading,     // Ref<boolean> — true until the first payload arrives
  error,       // Ref<Error | null> — bridge / transport / payload errors
  pending,     // Ref<boolean> — true while a callTool() is in flight
  hostContext, // Ref<HostContext | null> — theme, displayMode, locale, …
  callTool,    // (name, params?) => Promise<T | null> — re-invoke any MCP tool
  sendPrompt,  // (prompt: string) => void — push a message into the chat
  openLink,    // (url: string) => void — ask the host to open a URL
} = useMcpApp<MyPayload>()
```
Pass your payload type as the generic to get full inference downstream.
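For the colour-picker examples on this page, the payload generic might look like the sketch below. The `base` and `swatches` fields are assumptions inferred from the templates here, not a required schema:

```typescript
// Hypothetical payload for the colour-picker examples on this page.
// `base` and `swatches` are inferred from the templates below, not a fixed schema.
interface PalettePayload {
  base: string                              // seed colour, e.g. '#3b82f6'
  swatches: { name: string; hex: string }[] // derived palette entries
}

// useMcpApp<PalettePayload>() would then type `data` as Ref<PalettePayload | null>.
const payload: PalettePayload = {
  base: '#3b82f6',
  swatches: [{ name: 'sky', hex: '#38bdf8' }],
}
```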
### `data` & `loading`

`data` is already populated on first render when the handler returns `structuredContent`. `loading` starts as `true` and flips to `false` once the first payload arrives. Use `pending` for in-flight `callTool()` refreshes:
```vue
<template>
  <section v-if="loading" class="skeleton" />
  <section v-else-if="data" class="content">
    {{ data.swatches.length }} swatches from {{ data.base }}
  </section>
</template>
```
### `hostContext`

The host hands the iframe a context object during the `ui/initialize` handshake. Use it to adapt to dark mode, fullscreen, or a fixed iframe size:
```ts
interface HostContext {
  theme?: 'light' | 'dark'
  displayMode?: 'inline' | 'fullscreen' | 'pip'
  containerDimensions?: { width?: number, height?: number, maxWidth?: number, maxHeight?: number }
  locale?: string
  timeZone?: string
  platform?: 'web' | 'desktop' | 'mobile'
}
```
```vue
<script setup lang="ts">
const { hostContext } = useMcpApp()
const isDark = computed(() => hostContext.value?.theme === 'dark')
const isFullscreen = computed(() => hostContext.value?.displayMode === 'fullscreen')
</script>

<template>
  <main :data-theme="isDark ? 'dark' : 'light'" :data-mode="isFullscreen ? 'fullscreen' : 'inline'">
    …
  </main>
</template>
```
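Since the context arrives asynchronously, it can help to centralise the fallbacks in one helper. A minimal sketch; `renderDefaults` and its default values are illustrative choices, not part of the API:

```typescript
// Sketch: derive safe render defaults from a possibly-null HostContext,
// so the first paint (before ui/initialize completes) is still coherent.
// The helper name and fallback values are illustrative, not part of the API.
interface HostContext {
  theme?: 'light' | 'dark'
  displayMode?: 'inline' | 'fullscreen' | 'pip'
  locale?: string
}

function renderDefaults(ctx: HostContext | null) {
  return {
    theme: ctx?.theme ?? 'light',       // assume light until the host says otherwise
    mode: ctx?.displayMode ?? 'inline', // inline is the common initial mode
    locale: ctx?.locale ?? 'en-US',
  }
}
```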
`hostContext` is `null` on the very first paint and populates after the handshake (typically <50 ms). Always provide a fallback in your template.

### `sendPrompt(prompt)` — Follow-Ups
Push a message into the chat as if the user had typed it. The LLM then routes it like any other request — including invoking another MCP App:
```vue
<button @click="sendPrompt(`Use ${swatch.name} (${swatch.hex}) as the brand colour.`)">
  Use this colour
</button>
```
The LLM may reply, call another tool, or open a different MCP App in response — app-to-app workflows fall out of this primitive.
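Prompts are plain strings, so a small builder keeps follow-up wording consistent across buttons. The `Swatch` shape and the sentence itself are illustrative, not an API contract:

```typescript
// Illustrative helper: compose a follow-up prompt from a swatch.
// The Swatch shape and the wording are assumptions, not an API contract.
interface Swatch {
  name: string
  hex: string
}

function brandColourPrompt(swatch: Swatch): string {
  return `Use ${swatch.name} (${swatch.hex}) as the brand colour.`
}
```

In a template this pairs naturally with `sendPrompt(brandColourPrompt(swatch))`.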
`ui/message` forwards the prompt cleanly. ChatGPT acknowledges the request but doesn't always re-render the next tool inline (an upstream limitation).

### `callTool(name, params)` — In-Place Refresh

Re-invoke any MCP tool from the iframe. The result replaces `data` automatically:
```vue
<script setup lang="ts">
const { data, pending, callTool } = useMcpApp<PalettePayload>()

async function refresh(base: string) {
  await callTool('color-picker', { base })
}
</script>
```
Use this for filters, pagination, refresh buttons — anything that changes the query without a full chat round-trip.
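Pagination, for instance, is just `callTool` with merged params. A sketch: the `page` parameter and the `'color-picker'` tool name are assumptions for illustration, not part of the bridge API:

```typescript
// Sketch: wrap callTool for pagination. The 'color-picker' tool name and
// the `page` parameter are illustrative assumptions, not part of the API.
type CallTool = (name: string, params?: Record<string, unknown>) => Promise<unknown>

function makePager(callTool: CallTool, tool: string, params: Record<string, unknown>) {
  let page = 1
  const go = () => callTool(tool, { ...params, page }) // result lands in `data`
  return {
    next() { page += 1; return go() },
    prev() { page = Math.max(1, page - 1); return go() },
  }
}
```

Bind `next`/`prev` to buttons and gate them on `pending` to avoid overlapping refreshes.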
### `openLink(url)`

Sandboxed iframes can't open windows themselves. `openLink` asks the host to do it for you (e.g. open a booking confirmation in a new browser tab):
```vue
<button @click="openLink(`https://example.com/colors/${swatch.hex.slice(1)}`)">
  View on the web
</button>
```
Add the target domain to `csp.connectDomains` if you also need to `fetch()` it from the iframe.
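Where that setting lives depends on your setup; a hedged sketch of what it might look like — only the `csp.connectDomains` key is taken from the note above, the surrounding config shape is an assumption:

```typescript
// Hypothetical config placement: the surrounding shape is an assumption;
// only the `csp.connectDomains` key comes from the docs above.
const appConfig = {
  csp: {
    // domains the iframe is allowed to fetch() from
    connectDomains: ['https://example.com'],
  },
}

export default appConfig
```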