Example: Answering an incoming call

This example answers an inbound Voximplant call and bridges audio to Gemini Live API for real-time speech-to-speech conversations.

⬇️ Jump to the Full VoxEngine scenario.

Gemini 3.1 Flash Live Preview

This page reflects the current gemini-3.1-flash-live-preview flow from Google’s Live API docs: https://ai.google.dev/gemini-api/docs/models/gemini-3.1-flash-live-preview

Prerequisites

Session setup

The Gemini Live API session is configured via connectConfig, passed into Gemini.createLiveAPIClient(...).

In the full scenario, see CONNECT_CONFIG:

  • systemInstruction maps directly to SYSTEM_PROMPT, defining the agent’s behavior.
  • responseModalities: ["AUDIO"] asks Gemini to speak back over the call.
  • thinkingConfig: { thinkingLevel: "minimal" } keeps the voice session responsive.
  • inputAudioTranscription and outputAudioTranscription are enabled so ServerContent includes user + agent text.

Transcription logging

If you don’t need transcript logs, you can remove inputAudioTranscription and outputAudioTranscription.
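Put together, the session options above assemble into a connect config along these lines (a condensed sketch; the full scenario below uses the same shape, plus a speechConfig voice):

```javascript
// Sketch of a Live API connect config assembled from the options above.
const SYSTEM_PROMPT = "You are Voxi, a helpful voice assistant for phone callers.";

const CONNECT_CONFIG = {
  responseModalities: ["AUDIO"],                // speak back over the call
  thinkingConfig: {thinkingLevel: "minimal"},   // keep the voice session responsive
  systemInstruction: {parts: [{text: SYSTEM_PROMPT}]},
  inputAudioTranscription: {},                  // caller text in ServerContent
  outputAudioTranscription: {},                 // agent text in ServerContent
};
```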

Connect call audio

Once the Gemini Live API session is ready, bridge audio between the call and Gemini:

Connect call audio
VoxEngine.sendMediaBetween(call, geminiLiveAPIClient);

In the example, this happens in the Gemini.LiveAPIEvents.SetupComplete handler, after the Gemini session is ready. The same handler also sends a starter message to trigger the greeting:

Trigger the greeting
geminiLiveAPIClient.sendRealtimeInput({
  text: GREETING_TRIGGER,
});

Barge-in

Gemini includes an interrupted flag in ServerContent when the caller starts speaking during TTS. The example clears the media buffer so the agent stops speaking immediately:

Barge-in handling
if (payload.interrupted) {
  geminiLiveAPIClient.clearMediaBuffer();
}

Events

The scenario listens for Gemini.LiveAPIEvents.ServerContent to capture transcript text:

Transcripts
geminiLiveAPIClient.addEventListener(Gemini.LiveAPIEvents.ServerContent, (event) => {
  const payload = event?.data?.payload || {};
  if (payload.inputTranscription?.text) Logger.write(payload.inputTranscription.text);
  if (payload.outputTranscription?.text) Logger.write(payload.outputTranscription.text);
});
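The transcript handling is easy to factor into a pure function if you want to unit-test it outside the VoxEngine runtime. A sketch (`transcriptLines` is our name, not part of the Gemini module):

```javascript
// Turn a ServerContent payload into zero or more log lines.
// Pure function: no VoxEngine globals, so it can be tested anywhere.
function transcriptLines(payload = {}) {
  const lines = [];
  if (payload.inputTranscription?.text) {
    lines.push(`===USER=== ${payload.inputTranscription.text}`);
  }
  if (payload.outputTranscription?.text) {
    lines.push(`===AGENT=== ${payload.outputTranscription.text}`);
  }
  return lines;
}
```

Inside the ServerContent handler, the logging then becomes `transcriptLines(payload).forEach((line) => Logger.write(line));`.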

For illustration, the example also logs all Gemini events:

  • Gemini.LiveAPIEvents: SetupComplete, ServerContent, ToolCall, ToolCallCancellation, ConnectorInformation, Unknown
  • Gemini.Events: WebSocketMediaStarted, WebSocketMediaEnded

Notes

  • The example uses the Gemini Developer API (Gemini.Backend.GEMINI_API), not Vertex AI.
  • The current sample uses gemini-3.1-flash-live-preview.
  • inputAudioTranscription and outputAudioTranscription are enabled so you can log user and agent text in ServerContent events.

Gemini 2.5 compatibility

If you are adapting an older gemini-2.5-flash-native-audio-preview-12-2025 sample, the main startup difference is that 3.1 uses sendRealtimeInput(...) for the initial greeting trigger instead of sendClientContent(...). 3.1 also uses thinkingLevel in place of the older thinkingBudget field.
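Side by side, the startup difference looks roughly like this (the 2.5-era clientContent shape and the budget value are assumptions based on Google's Live API docs, not taken from this sample):

```javascript
// Old startup (gemini-2.5-flash-native-audio-preview-12-2025 samples):
//   thinkingConfig: {thinkingBudget: 0}  // numeric budget (illustrative value)
//   geminiLiveAPIClient.sendClientContent({
//     turns: [{role: "user", parts: [{text: GREETING_TRIGGER}]}],
//     turnComplete: true,
//   });

// New startup (gemini-3.1-flash-live-preview, this example):
//   thinkingConfig: {thinkingLevel: "minimal"}
//   geminiLiveAPIClient.sendRealtimeInput({text: GREETING_TRIGGER});
```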

See the VoxEngine API Reference for more details.

Full VoxEngine scenario

voxengine-gemini-answer-incoming-call.js
/**
 * Voximplant + Gemini Live API connector demo
 * Scenario: answer an incoming call and bridge it to Gemini Live API.
 */

require(Modules.Gemini);
require(Modules.ApplicationStorage);

const SYSTEM_PROMPT = `You are Voxi, a helpful voice assistant for phone callers.
Keep responses short and telephony-friendly (usually 1-2 sentences).`;

const GREETING_TRIGGER = "Say hello and ask how you can help.";

// -------------------- Gemini Live API settings --------------------
const CONNECT_CONFIG = {
  responseModalities: ["AUDIO"],
  thinkingConfig: {thinkingLevel: "minimal"},
  speechConfig: {
    voiceConfig: {
      prebuiltVoiceConfig: {voiceName: "Aoede"},
    },
  },
  systemInstruction: {
    parts: [{text: SYSTEM_PROMPT}],
  },
  inputAudioTranscription: {},
  outputAudioTranscription: {},
};

VoxEngine.addEventListener(AppEvents.CallAlerting, async ({call}) => {
  let voiceAIClient;

  // Termination handlers - add cleanup and logging as needed
  call.addEventListener(CallEvents.Disconnected, () => VoxEngine.terminate());
  call.addEventListener(CallEvents.Failed, () => VoxEngine.terminate());

  try {
    call.answer();
    // call.record({ hd_audio: true, stereo: true }); // Optional: record the call

    // Create client and connect to Gemini Live API
    voiceAIClient = await Gemini.createLiveAPIClient({
      apiKey: (await ApplicationStorage.get("GEMINI_API_KEY")).value,
      model: "gemini-3.1-flash-live-preview",
      backend: Gemini.Backend.GEMINI_API,
      connectConfig: CONNECT_CONFIG,
      onWebSocketClose: (event) => {
        Logger.write("===Gemini.WebSocket.Close===");
        if (event) Logger.write(JSON.stringify(event));
        VoxEngine.terminate();
      },
    });

    // ---------------------- Event handlers -----------------------
    // Wait for Gemini setup, then bridge audio and trigger the greeting
    voiceAIClient.addEventListener(Gemini.LiveAPIEvents.SetupComplete, () => {
      VoxEngine.sendMediaBetween(call, voiceAIClient);
      voiceAIClient.sendRealtimeInput({
        text: GREETING_TRIGGER,
      });
    });

    // Capture transcripts + handle barge-in
    voiceAIClient.addEventListener(Gemini.LiveAPIEvents.ServerContent, (event) => {
      const payload = event?.data?.payload || {};
      if (payload.inputTranscription?.text) {
        Logger.write(`===USER=== ${payload.inputTranscription.text}`);
      }
      if (payload.outputTranscription?.text) {
        Logger.write(`===AGENT=== ${payload.outputTranscription.text}`);
      }
      if (payload.interrupted) {
        Logger.write("===BARGE-IN=== Gemini.LiveAPIEvents.ServerContent");
        voiceAIClient.clearMediaBuffer();
      }
    });

    // Log all Gemini events for illustration/debugging
    [
      Gemini.LiveAPIEvents.SetupComplete,
      Gemini.LiveAPIEvents.ServerContent,
      Gemini.LiveAPIEvents.ToolCall,
      Gemini.LiveAPIEvents.ToolCallCancellation,
      Gemini.LiveAPIEvents.ConnectorInformation,
      Gemini.LiveAPIEvents.Unknown,
      Gemini.Events.WebSocketMediaStarted,
      Gemini.Events.WebSocketMediaEnded,
    ].forEach((eventName) => {
      voiceAIClient.addEventListener(eventName, (event) => {
        Logger.write(`===${event.name}===`);
        if (event?.data) Logger.write(JSON.stringify(event.data));
      });
    });
  } catch (error) {
    Logger.write("===SOMETHING_WENT_WRONG===");
    Logger.write(error);
    voiceAIClient?.close();
    VoxEngine.terminate();
  }
});