> For a complete documentation index, fetch https://docs.voximplant.ai/llms.txt

# Example: Function calling

> This example answers an inbound call, enables Gemini tool calling, and sends tool responses back to the model.



**⬇️ Jump to the [Full VoxEngine scenario](#full-voxengine-scenario).**

<Info title="Gemini 3.1 Flash Live Preview">
  This page reflects the current `gemini-3.1-flash-live-preview` flow from Google’s Live API docs:
  [https://ai.google.dev/gemini-api/docs/models/gemini-3.1-flash-live-preview](https://ai.google.dev/gemini-api/docs/models/gemini-3.1-flash-live-preview)
</Info>

## Prerequisites

* Set up an inbound entrypoint for the caller:
  * Phone number: [https://voximplant.com/docs/getting-started/basic-concepts/phone-numbers](https://voximplant.com/docs/getting-started/basic-concepts/phone-numbers)
  * WhatsApp: [https://voximplant.com/docs/guides/integrations/whatsapp](https://voximplant.com/docs/guides/integrations/whatsapp)
  * SIP user / SIP registration: [https://voximplant.com/docs/guides/calls/sip](https://voximplant.com/docs/guides/calls/sip)
  * App user: [https://voximplant.com/docs/getting-started/basic-concepts/users](https://voximplant.com/docs/getting-started/basic-concepts/users) (see also [https://voximplant.com/docs/guides/calls/scenarios#how-to-call-a-voximplant-user](https://voximplant.com/docs/guides/calls/scenarios#how-to-call-a-voximplant-user))
* Create a routing rule that points the destination (phone number / WhatsApp / SIP username / app user alias) to this scenario: [https://voximplant.com/docs/getting-started/basic-concepts/routing-rules](https://voximplant.com/docs/getting-started/basic-concepts/routing-rules)
* Store your Gemini API key in Voximplant [Secrets](/platform/voxengine/secrets) under `GEMINI_API_KEY`.

## Tool definitions

The example registers tools in `connectConfig.tools` using `functionDeclarations` with JSON Schema parameters. The full scenario also sets `thinkingConfig: { thinkingLevel: "minimal" }` to keep tool-calling turns responsive:

```js
const TOOLS = [
  {
    functionDeclarations: [
      {
        name: "get_weather",
        description: "Get current weather for a location (demo stub)",
        parametersJsonSchema: {
          type: "object",
          properties: {
            location: { type: "string" },
          },
          required: ["location"],
        },
      },
    ],
  },
];
```
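
Before dispatching a tool call, it can help to validate the model-supplied arguments against the declared schema. The following is a minimal sketch in plain JavaScript (the `validateArgs` helper is hypothetical, not part of the VoxEngine or Gemini SDKs), checking only required fields and string types:

```js
// Hypothetical helper: check required properties (and string types)
// declared in a functionDeclaration's parametersJsonSchema.
function validateArgs(schema, args) {
  const missing = (schema.required || []).filter((key) => {
    const value = args?.[key];
    if (value === undefined || value === null) return true;
    const expected = schema.properties?.[key]?.type;
    return expected === "string" && typeof value !== "string";
  });
  return { ok: missing.length === 0, missing };
}

const schema = {
  type: "object",
  properties: { location: { type: "string" } },
  required: ["location"],
};

validateArgs(schema, { location: "Berlin" }); // → { ok: true, missing: [] }
validateArgs(schema, {}); // → { ok: false, missing: ["location"] }
```

When a check fails, return a `response: { error: ... }` entry for that call instead of invoking the tool, so the model can recover in conversation.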

## Handling tool calls

Gemini sends tool calls via `Gemini.LiveAPIEvents.ToolCall`. Respond with `sendToolResponse`:

```js
geminiLiveAPIClient.addEventListener(Gemini.LiveAPIEvents.ToolCall, (event) => {
  const functionCalls = event?.data?.payload?.functionCalls || [];
  const responses = functionCalls.map((fn) => ({
    id: fn.id,
    name: fn.name,
    response: { output: { ok: true } },
  }));
  geminiLiveAPIClient.sendToolResponse({ functionResponses: responses });
});
```
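
As the number of tools grows, a name-to-handler map keeps the listener flat. A sketch in plain JavaScript, with hypothetical stub handlers mirroring the tools in this example:

```js
// Map tool names to handlers; each returns the `response` payload
// expected by sendToolResponse (stub implementations for illustration).
const toolHandlers = {
  get_weather: (args) => ({
    output: { location: args?.location || "Unknown", temperature_f: 72 },
  }),
  hangup_call: () => ({ output: { result: "Hanging up now. Goodbye!" } }),
};

function buildToolResponses(functionCalls) {
  return functionCalls
    .filter((fn) => fn?.id && fn?.name)
    .map((fn) => ({
      id: fn.id,
      name: fn.name,
      response: toolHandlers[fn.name]
        ? toolHandlers[fn.name](fn.args)
        : { error: `Unhandled tool: ${fn.name}` },
    }));
}
```

Inside the `ToolCall` listener, pass `event?.data?.payload?.functionCalls` to `buildToolResponses` and forward a non-empty result to `sendToolResponse`.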

## Notes

* The example uses the Gemini Developer API (`Gemini.Backend.GEMINI_API`), not Vertex AI.
* The current sample uses `gemini-3.1-flash-live-preview`.
* Tool implementations in the example are stubs. Replace with real integrations as needed.
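
To replace the `get_weather` stub with a real lookup, one option is to keep the handler independent of the HTTP client by injecting an async `fetchJson` function (in VoxEngine this would typically wrap `Net.httpRequestAsync`; the endpoint and response field names below are hypothetical):

```js
// Hypothetical real get_weather implementation. `fetchJson` is an
// injected async (url) => parsed-JSON function.
async function getWeather(fetchJson, location) {
  try {
    const data = await fetchJson(
      `https://weather.example.com/v1/current?q=${encodeURIComponent(location)}`
    );
    return {
      output: { location, temperature_f: data.temp_f, condition: data.condition },
    };
  } catch (e) {
    return { error: `Weather lookup failed: ${e.message}` };
  }
}
```

Returning an `error` payload instead of throwing keeps the tool-call turn alive, so the model can apologize or offer to retry.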

<Warning title="Gemini 2.5 compatibility">
  If you are updating a `2.5` Gemini Live function-calling sample, switch the startup prompt from `sendClientContent(...)` to `sendRealtimeInput(...)` for `3.1`. Also replace the older `thinkingBudget` field with `thinkingLevel`.
</Warning>
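
In `connectConfig` terms, the migration described in the warning looks roughly like this (illustrative comment-only fragment; verify field shapes against the current Live API docs):

```js
// Gemini 2.5-era config and startup prompt (before):
//   thinkingConfig: { thinkingBudget: 0 },
//   ...
//   client.sendClientContent({ ... });

// Gemini 3.1 equivalent (after):
//   thinkingConfig: { thinkingLevel: "minimal" },
//   ...
//   client.sendRealtimeInput({ text: "Say hello and ask how you can help." });
```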

[See the VoxEngine API Reference for more details](https://voximplant.com/docs/references/voxengine/gemini).

## Full VoxEngine scenario

```javascript title={"voxengine-gemini-function-calling.js"} maxLines={0}
/**
 * Voximplant + Gemini Live API connector demo
 * Scenario: answer an incoming call and handle Gemini tool calls.
 */

require(Modules.Gemini);
const SYSTEM_INSTRUCTION = `
You are Voxi, a helpful voice assistant for phone callers.
Keep responses short and telephony-friendly (usually 1-2 sentences).
If the caller asks about the weather, call the get_weather tool.
If the caller wants to end the call, call the hangup_call tool.
`;

const CONNECT_CONFIG = {
    responseModalities: ["AUDIO"],
    thinkingConfig: {thinkingLevel: "minimal"},
    systemInstruction: {
        parts: [{text: SYSTEM_INSTRUCTION}],
    },
    tools: [
        {
            functionDeclarations: [
                {
                    name: "get_weather",
                    description: "Get current weather for a location (demo stub)",
                    parametersJsonSchema: {
                        type: "object",
                        properties: {
                            location: {
                                type: "string",
                                description: "City name, for example: San Francisco",
                            },
                        },
                        required: ["location"],
                    },
                },
                {
                    name: "hangup_call",
                    description: "Hang up the current call",
                    parametersJsonSchema: {
                        type: "object",
                        properties: {},
                        required: [],
                    },
                },
            ],
        },
    ],
    inputAudioTranscription: {},
    outputAudioTranscription: {},
};

VoxEngine.addEventListener(AppEvents.CallAlerting, async ({call}) => {
    let voiceAIClient;

    // Terminate the scenario when the call ends or fails; add cleanup and logging as needed
    call.addEventListener(CallEvents.Disconnected, VoxEngine.terminate);
    call.addEventListener(CallEvents.Failed, VoxEngine.terminate);

    try {
        call.answer();

        voiceAIClient = await Gemini.createLiveAPIClient({
            apiKey: VoxEngine.getSecretValue('GEMINI_API_KEY'),
            model: "gemini-3.1-flash-live-preview",
            backend: Gemini.Backend.GEMINI_API,
            connectConfig: CONNECT_CONFIG,
            onWebSocketClose: (event) => {
                Logger.write("===Gemini.WebSocket.Close===");
                if (event) Logger.write(JSON.stringify(event));
                VoxEngine.terminate();
            },
        });

        voiceAIClient.addEventListener(Gemini.LiveAPIEvents.SetupComplete, () => {
            VoxEngine.sendMediaBetween(call, voiceAIClient);
            voiceAIClient.sendRealtimeInput({
                text: "Say hello and ask how you can help.",
            });
        });

        voiceAIClient.addEventListener(Gemini.LiveAPIEvents.ToolCall, (event) => {
            const functionCalls = event?.data?.payload?.functionCalls || [];
            if (!functionCalls.length) return;

            const responses = functionCalls.map((fn) => {
                const {id, name, args} = fn || {};
                if (!id || !name) return null;

                if (name === "get_weather") {
                    const location = args?.location || "Unknown";
                    return {
                        id,
                        name,
                        response: {
                            output: {
                                location,
                                temperature_f: 72,
                                condition: "sunny",
                            },
                        },
                    };
                }

                if (name === "hangup_call") {
                    // hang up after 5 seconds to allow message playback
                    setTimeout(() => {
                        call.hangup();
                        VoxEngine.terminate();
                    }, 5_000);
                    return {
                        id,
                        name,
                        response: {
                            output: {
                                result: "Hanging up now. Goodbye!",
                            },
                        },
                    };
                }

                return {
                    id,
                    name,
                    response: {
                        error: `Unhandled tool: ${name}`,
                    },
                };
            }).filter(Boolean);

            if (responses.length) {
                voiceAIClient.sendToolResponse({
                    functionResponses: responses,
                });
            }
        });

        // handle barge-in
        voiceAIClient.addEventListener(Gemini.LiveAPIEvents.ServerContent, (event) => {
            const payload = event?.data?.payload || {};
            if (payload.interrupted) {
                Logger.write("===BARGE-IN=== Gemini.LiveAPIEvents.ServerContent");
                voiceAIClient.clearMediaBuffer();
            }
        });

        [
            Gemini.LiveAPIEvents.SetupComplete,
            Gemini.LiveAPIEvents.ServerContent,
            Gemini.LiveAPIEvents.ToolCall,
            Gemini.LiveAPIEvents.ToolCallCancellation,
            Gemini.LiveAPIEvents.ConnectorInformation,
            Gemini.LiveAPIEvents.Unknown,
            Gemini.Events.WebSocketMediaStarted,
            Gemini.Events.WebSocketMediaEnded,
        ].forEach((eventName) => {
            voiceAIClient.addEventListener(eventName, (event) => {
                Logger.write(`===${event.name}===`);
                if (event?.data) Logger.write(JSON.stringify(event.data));
            });
        });
    } catch (error) {
        Logger.write("===SOMETHING_WENT_WRONG===");
        Logger.write(error);
        voiceAIClient?.close();
        VoxEngine.terminate();
    }
});

```