Example: Answering an incoming call
This example answers an inbound Voximplant call and bridges audio to Gemini Live API for real-time speech-to-speech conversations.
⬇️ Jump to the Full VoxEngine scenario.
Gemini 3.1 Flash Live Preview
This page reflects the current `gemini-3.1-flash-live-preview` flow from Google’s Live API docs:
https://ai.google.dev/gemini-api/docs/models/gemini-3.1-flash-live-preview
Prerequisites
- Set up an inbound entrypoint for the caller:
- Phone number: https://voximplant.com/docs/getting-started/basic-concepts/phone-numbers
- WhatsApp: https://voximplant.com/docs/guides/integrations/whatsapp
- SIP user / SIP registration: https://voximplant.com/docs/guides/calls/sip
- App user: https://voximplant.com/docs/getting-started/basic-concepts/users (see also https://voximplant.com/docs/guides/calls/scenarios#how-to-call-a-voximplant-user)
- Create a routing rule that points the destination (phone number / WhatsApp / SIP username / app user alias) to this scenario: https://voximplant.com/docs/getting-started/basic-concepts/routing-rules
- Store your Gemini API key in Voximplant `ApplicationStorage` under `GEMINI_API_KEY`.
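Inside the scenario, the key can then be read back before creating the Gemini client. A minimal sketch, assuming the `GEMINI_API_KEY` record from the prerequisite above (the helper name and error handling are illustrative):

```javascript
require(Modules.ApplicationStorage);

// Sketch: read the API key stored in the prerequisites.
// ApplicationStorage.get() resolves with the stored record (or null).
async function getGeminiApiKey() {
  const record = await ApplicationStorage.get("GEMINI_API_KEY");
  if (!record || !record.value) {
    Logger.write("GEMINI_API_KEY is missing from ApplicationStorage");
    VoxEngine.terminate();
    return null;
  }
  return record.value;
}
```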
Session setup
The Gemini Live API session is configured via `connectConfig`, passed into `Gemini.createLiveAPIClient(...)`.
In the full scenario, see `GEMINI_CONNECT_CONFIG`:
- `systemInstruction` maps directly to `SYSTEM_PROMPT`, defining the agent’s behavior.
- `responseModalities: ["AUDIO"]` asks Gemini to speak back over the call.
- `thinkingConfig: { thinkingLevel: "minimal" }` keeps the voice session responsive.
- `inputAudioTranscription` and `outputAudioTranscription` are enabled so `ServerContent` includes user and agent text.
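Put together, the config described above might look roughly like this (a sketch only; `SYSTEM_PROMPT` here is a placeholder string, and the exact field shapes should be checked against the full scenario and the Live API docs):

```javascript
// Placeholder prompt; the real scenario defines its own SYSTEM_PROMPT.
const SYSTEM_PROMPT = "You are a friendly voice agent answering a phone call.";

const GEMINI_CONNECT_CONFIG = {
  systemInstruction: SYSTEM_PROMPT,             // agent behavior
  responseModalities: ["AUDIO"],                // speak back over the call
  thinkingConfig: { thinkingLevel: "minimal" }, // keep the session responsive
  inputAudioTranscription: {},                  // caller text in ServerContent
  outputAudioTranscription: {},                 // agent text in ServerContent
};
```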
Transcription logging
If you don’t need transcript logs, you can remove `inputAudioTranscription` and `outputAudioTranscription`.
Connect call audio
Once the Gemini Live API session is ready, bridge audio between the call and Gemini:
In the example, this happens in the `Gemini.LiveAPIEvents.SetupComplete` handler. The same handler also sends a starter message to trigger the greeting:
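A sketch of that handler, assuming `call` is the answered inbound call and `client` was returned by `Gemini.createLiveAPIClient(...)` (the exact `sendRealtimeInput` payload shape should be verified against the Live API docs):

```javascript
client.addEventListener(Gemini.LiveAPIEvents.SetupComplete, () => {
  // Bridge audio both ways between the caller and the Gemini session
  VoxEngine.sendMediaBetween(call, client);
  // Starter message so the agent greets the caller first
  client.sendRealtimeInput({ text: "Greet the caller and offer help." });
});
```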
Barge-in
Gemini includes an `interrupted` flag in `ServerContent` when the caller starts speaking during TTS. The example clears the media buffer so the agent stops speaking immediately:
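A sketch of that barge-in handling, assuming the event payload exposes the `ServerContent` fields under `e.data` (verify the exact shape in the full scenario):

```javascript
client.addEventListener(Gemini.LiveAPIEvents.ServerContent, (e) => {
  // Gemini sets `interrupted` when the caller talks over the agent
  if (e.data && e.data.interrupted) {
    // Drop queued agent audio so playback stops immediately
    call.clearMediaBuffer();
  }
});
```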
Events
The scenario listens for `Gemini.LiveAPIEvents.ServerContent` to capture transcript text:

For illustration, the example also logs all Gemini events:

- `Gemini.LiveAPIEvents`: `SetupComplete`, `ServerContent`, `ToolCall`, `ToolCallCancellation`, `ConnectorInformation`, `Unknown`
- `Gemini.Events`: `WebSocketMediaStarted`, `WebSocketMediaEnded`
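The transcript capture can be sketched as follows, assuming the `inputTranscription` / `outputTranscription` field names from the Live API `ServerContent` message arrive under `e.data` (check the full scenario for the exact payload):

```javascript
client.addEventListener(Gemini.LiveAPIEvents.ServerContent, (e) => {
  const content = e.data || {};
  if (content.inputTranscription) {
    Logger.write(`Caller: ${content.inputTranscription.text}`);
  }
  if (content.outputTranscription) {
    Logger.write(`Agent: ${content.outputTranscription.text}`);
  }
});
```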
Notes
- The example uses the Gemini Developer API (`Gemini.Backend.GEMINI_API`), not Vertex AI.
- The current sample uses `gemini-3.1-flash-live-preview`.
- `inputAudioTranscription` and `outputAudioTranscription` are enabled so you can log user and agent text in `ServerContent` events.
Gemini 2.5 compatibility
If you are adapting an older `gemini-2.5-flash-native-audio-preview-12-2025` sample, the main startup difference is that 3.1 should use `sendRealtimeInput(...)` for the initial greeting trigger instead of `sendClientContent(...)`. 3.1 also uses `thinkingLevel` instead of the older `thinkingBudget` field.
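As a rough before/after sketch of those two changes (the payload shapes here are illustrative, not copied from either sample):

```javascript
// 2.5-era startup trigger and thinking config:
// client.sendClientContent({ turns: [{ role: "user", parts: [{ text: "Hello" }] }] });
// thinkingConfig: { thinkingBudget: 0 }

// 3.1 equivalents:
client.sendRealtimeInput({ text: "Hello" });
// thinkingConfig: { thinkingLevel: "minimal" }
```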
See the VoxEngine API Reference for more details.