---
title: 'Example: Function calling'
---

This example answers an inbound Voximplant call, connects it to Deepgram Voice Agent, and handles function calls (tool requests) from the agent inside VoxEngine.

**⬇️ Jump to the [Full VoxEngine scenario](#full-voxengine-scenario).**

## Prerequisites

* Set up an inbound entrypoint for the caller:
  * Phone number: [https://voximplant.com/docs/getting-started/basic-concepts/phone-numbers](https://voximplant.com/docs/getting-started/basic-concepts/phone-numbers)
  * WhatsApp: [https://voximplant.com/docs/guides/integrations/whatsapp](https://voximplant.com/docs/guides/integrations/whatsapp)
  * SIP user / SIP registration: [https://voximplant.com/docs/guides/calls/sip](https://voximplant.com/docs/guides/calls/sip)
  * Voximplant user: [https://voximplant.com/docs/getting-started/basic-concepts/users](https://voximplant.com/docs/getting-started/basic-concepts/users) (see also [https://voximplant.com/docs/guides/calls/scenarios#how-to-call-a-voximplant-user](https://voximplant.com/docs/guides/calls/scenarios#how-to-call-a-voximplant-user))
* Create a routing rule that points the destination (number / WhatsApp / SIP username) to this scenario: [https://voximplant.com/docs/getting-started/basic-concepts/routing-rules](https://voximplant.com/docs/getting-started/basic-concepts/routing-rules)
* Store your Deepgram API key secret value in Voximplant `ApplicationStorage` under `DEEPGRAM_API_KEY`.

## Session setup

The Voice Agent session is configured via a `settingsOptions` object passed to `Deepgram.createVoiceAgentClient(...)`. For function calling, the key part is `SETTINGS_OPTIONS.agent.think.functions`:

* `name`, `description`, `parameters`: define the tool schema the LLM can call
* `client_side: true`: ensures VoxEngine receives `FunctionCallRequest` and can respond

## Connect call audio

Once the `Deepgram.VoiceAgentClient` is created, bridge audio both ways:

```js title="Connect call audio"
VoxEngine.sendMediaBetween(call, voiceAgentClient);
```

## Function calling

In the scenario, `Deepgram.VoiceAgentEvents.FunctionCallRequest` delivers one or more function calls requested by the agent. Each call includes an `id`, a `name`, and `arguments` (a JSON-encoded string with the tool parameters). Respond using `voiceAgentClient.sendFunctionCallResponse(...)` with a `FunctionCallResponse` message containing the same `id`:

```js title="Handle FunctionCallRequest and respond"
voiceAgentClient.addEventListener(Deepgram.VoiceAgentEvents.FunctionCallRequest, (event) => {
  const { functions } = event?.data?.payload || {};
  if (!Array.isArray(functions)) return;

  functions.forEach(({ id, name, arguments: rawArguments }) => {
    // ...perform the function...
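    // Minimal argument-handling sketch: `arguments` arrives as a JSON string,
    // so parse it before doing the real work. JSON.parse throws on malformed
    // input, so guard it. The demo response below only acknowledges the call
    // and does not use `args`; see the full scenario for a complete handler.
    let args = {};
    try {
      args = rawArguments ? JSON.parse(rawArguments) : {};
    } catch (error) {
      Logger.write(`Failed to parse function call arguments: ${rawArguments}`);
      Logger.write(error);
    }
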
    voiceAgentClient.sendFunctionCallResponse({
      type: "FunctionCallResponse",
      id,
      name,
      content: JSON.stringify({ ok: true }),
    });
  });
});
```

## Barge-in

As in the other examples, keep the experience interruption-friendly by clearing buffered audio on `UserStartedSpeaking`:

```js title="Barge-in"
voiceAgentClient.addEventListener(Deepgram.VoiceAgentEvents.UserStartedSpeaking, () => {
  voiceAgentClient.clearMediaBuffer();
});
```

## Events

The function calling flow is driven by:

* `Deepgram.VoiceAgentEvents.FunctionCallRequest`: the agent requests a client-side function execution
* `Deepgram.VoiceAgentEvents.FunctionCallResponse`: emitted when a function call response is processed

For the complete list of supported events, see:

* Voice Agent events: [https://voximplant.com/docs/references/voxengine/deepgram/voiceagentevents](https://voximplant.com/docs/references/voxengine/deepgram/voiceagentevents)
* WebSocket media events: [https://voximplant.com/docs/references/voxengine/deepgram/events](https://voximplant.com/docs/references/voxengine/deepgram/events)

## Notes

[See the VoxEngine API Reference for more details](https://voximplant.com/docs/references/voxengine/deepgram).

## Full VoxEngine scenario

```javascript title={"voxengine-deepgram-function-calling.js"} maxLines={0}
/**
 * Voximplant + Deepgram Voice Agent connector demo
 * Scenario: answer an incoming call and handle Deepgram function calling.
 */
require(Modules.Deepgram);
require(Modules.ApplicationStorage);

const SYSTEM_PROMPT = `
You are a helpful English-speaking voice assistant for phone callers.
Keep your turns short and telephony-friendly (usually 1–2 sentences).
If the caller asks about the weather, call the "get_weather" function.
`;

// -------------------- Deepgram Voice Agent settings --------------------
const SETTINGS_OPTIONS = {
  tags: ["voximplant", "deepgram", "voice_agent_connector", "function_calling_demo"],
  agent: {
    language: "en",
    greeting: "Hi! I'm Voxi. How can I help today?",
    listen: {
      provider: {
        type: "deepgram",
        model: "flux-general-en",
      },
    },
    think: {
      provider: {
        type: "open_ai",
        model: "gpt-4o-mini",
      },
      prompt: SYSTEM_PROMPT,
      functions: [
        {
          name: "get_weather",
          description: "Get current weather for a location (demo stub)",
          parameters: {
            type: "object",
            properties: {
              location: {
                type: "string",
                description: "City name, for example: San Francisco",
              },
            },
            required: ["location"],
          },
          // Mark as client-side so VoxEngine receives FunctionCallRequest and can respond.
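          // The FunctionCallRequest handler registered in CallAlerting below relies on this flag.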
          client_side: true,
        },
      ],
    },
    speak: {
      provider: {
        type: "deepgram",
        model: "aura-2-cordelia-en",
      },
    },
  },
};

VoxEngine.addEventListener(AppEvents.CallAlerting, async ({call}) => {
  let voiceAIClient;

  // Termination functions - add cleanup and logging as needed
  call.addEventListener(CallEvents.Disconnected, () => VoxEngine.terminate());
  call.addEventListener(CallEvents.Failed, () => VoxEngine.terminate());

  try {
    call.answer();
    // call.record({hd_audio: true, stereo: true}); // optional: call recording

    // Create client and wire media
    voiceAIClient = await Deepgram.createVoiceAgentClient({
      apiKey: (await ApplicationStorage.get("DEEPGRAM_API_KEY")).value,
      settingsOptions: SETTINGS_OPTIONS,
    });
    VoxEngine.sendMediaBetween(call, voiceAIClient);

    // ---------------------- Event handlers -----------------------

    // Barge-in: keep conversation responsive
    voiceAIClient.addEventListener(Deepgram.VoiceAgentEvents.UserStartedSpeaking, () => {
      Logger.write("===BARGE-IN: Deepgram.VoiceAgentEvents.UserStartedSpeaking===");
      voiceAIClient.clearMediaBuffer();
    });

    // Function calling: handle tool requests and send back responses
    voiceAIClient.addEventListener(Deepgram.VoiceAgentEvents.FunctionCallRequest, (event) => {
      const {functions} = event?.data?.payload || {};
      if (!Array.isArray(functions) || functions.length === 0) return;

      functions.forEach((fn) => {
        const {id, name, arguments: rawArguments} = fn || {};
        if (!id || !name) return;

        if (name !== "get_weather") {
          voiceAIClient.sendFunctionCallResponse({
            type: "FunctionCallResponse",
            id,
            name,
            content: JSON.stringify({error: `Unhandled function: ${name}`}),
          });
          return;
        }

        let args = {};
        try {
          args = rawArguments ? JSON.parse(rawArguments) : {};
        } catch (error) {
          Logger.write(`===FUNCTION_ARGS_PARSE_ERROR=== ${rawArguments}`);
          Logger.write(error);
        }

        const location = args.location || "Unknown";

        // Demo response (no external API call)
        const result = {
          location,
          temperature_f: 72,
          condition: "sunny",
        };

        voiceAIClient.sendFunctionCallResponse({
          type: "FunctionCallResponse",
          id,
          name,
          content: JSON.stringify(result),
        });
      });
    });
  } catch (error) {
    Logger.write("===UNHANDLED_ERROR===");
    Logger.write(error);
    VoxEngine.terminate();
  }
});
```
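
The scenario above answers `get_weather` with a hard-coded stub. In a real deployment you would typically fetch the data before responding, for example with VoxEngine's `Net.httpRequestAsync`. The sketch below is illustrative only: the endpoint URL, the `lookUpWeather` helper, and the response shape are placeholders, and the `forEach` callback in the handler would need to be `async` to `await` the lookup.

```js title="Replace the demo stub with a real lookup (sketch)"
// Hypothetical helper, not part of the demo scenario.
// The weather endpoint and its response format are placeholders.
async function lookUpWeather(location) {
  const url = `https://api.example.com/weather?city=${encodeURIComponent(location)}`;
  const response = await Net.httpRequestAsync(url);
  if (response.code !== 200) {
    // Keep the agent conversational even if the lookup fails.
    return { location, error: "weather service unavailable" };
  }
  return JSON.parse(response.text);
}

// Inside the FunctionCallRequest handler, the hard-coded `result` would become:
// const result = await lookUpWeather(location);
```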