> For a complete documentation index, fetch https://docs.voximplant.ai/llms.txt

# Example: Answering an incoming call

> This example answers an inbound Voximplant call and bridges audio to Deepgram Voice Agent for real-time speech-to-speech conversations.



**⬇️ Jump to the [Full VoxEngine scenario](#full-voxengine-scenario).**

## Prerequisites

* Set up an inbound entrypoint for the caller:
  * Phone number: [https://voximplant.com/docs/getting-started/basic-concepts/phone-numbers](https://voximplant.com/docs/getting-started/basic-concepts/phone-numbers)
  * WhatsApp: [https://voximplant.com/docs/guides/integrations/whatsapp](https://voximplant.com/docs/guides/integrations/whatsapp)
  * SIP user / SIP registration: [https://voximplant.com/docs/guides/calls/sip](https://voximplant.com/docs/guides/calls/sip)
  * App user: [https://voximplant.com/docs/getting-started/basic-concepts/users](https://voximplant.com/docs/getting-started/basic-concepts/users) (see also [https://voximplant.com/docs/guides/calls/scenarios#how-to-call-a-voximplant-user](https://voximplant.com/docs/guides/calls/scenarios#how-to-call-a-voximplant-user))
* Create a routing rule that points the destination (phone number / WhatsApp / SIP username / app user alias) to this scenario: [https://voximplant.com/docs/getting-started/basic-concepts/routing-rules](https://voximplant.com/docs/getting-started/basic-concepts/routing-rules)
* Store your Deepgram API key in Voximplant [Secrets](/platform/voxengine/secrets) under `DEEPGRAM_API_KEY`.

## Session setup

The Deepgram Voice Agent session is configured via a `settingsOptions` object that’s passed directly to `Deepgram.createVoiceAgentClient(...)`.

In the full example, see `SETTINGS_OPTIONS.agent`:

* `listen`: speech-to-text provider (for example `type: "deepgram"`)
* `think`: LLM provider + prompt
* `speak`: text-to-speech provider
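Taken together, a minimal `agent` configuration might look like the sketch below. The provider and model names mirror the full scenario at the end of this page; treat them as examples and substitute your own:

```js
// Minimal settingsOptions sketch for Deepgram.createVoiceAgentClient(...).
// Provider/model names are examples taken from the full scenario below.
const settingsOptions = {
  agent: {
    language: "en",
    listen: {
      provider: { type: "deepgram", model: "flux-general-en" }, // speech-to-text
    },
    think: {
      provider: { type: "open_ai", model: "gpt-4o-mini" }, // LLM
      prompt: "You are a concise, friendly phone assistant.",
    },
    speak: {
      provider: { type: "deepgram", model: "aura-2-cordelia-en" }, // text-to-speech
    },
    // No `audio` settings here: the VoxEngine connector manages audio itself.
  },
};
```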

<Info title="Don’t set audio settings">
  When using the VoxEngine connector, do not include `audio` settings in the Deepgram settings object — audio is handled by the connector for optimal telephony quality.
</Info>

## Connect call audio

Once you’ve created the `Deepgram.VoiceAgentClient`, you need to connect audio both ways between the phone call and the agent.

In the example, this is done with:

```js title="Connect call audio"
VoxEngine.sendMediaBetween(call, voiceAgentClient);
```

This bridges media between a `Call` and a `Deepgram.VoiceAgentClient`. You can also do the same thing explicitly, one direction at a time, using `sendMediaTo`:

```js title="Equivalent using sendMediaTo"
call.sendMediaTo(voiceAgentClient);
voiceAgentClient.sendMediaTo(call);
```

## Mid-session updates

After the client is created (see `voiceAgentClient` in the scenario), you can update the agent mid-call without reconnecting using `Deepgram.VoiceAgentClient` methods.

These methods are documented in the Voximplant reference:

* [https://voximplant.com/docs/references/voxengine/deepgram/voiceagentclient](https://voximplant.com/docs/references/voxengine/deepgram/voiceagentclient)

```js title="Mid-session updates"
// Update the agent’s system prompt mid-call (sendUpdatePrompt)
voiceAgentClient.sendUpdatePrompt({
  type: "UpdatePrompt",
  prompt: "You are now a helpful travel assistant. Keep responses to 1 sentence.",
});

// Change the TTS voice/model mid-call (sendUpdateSpeak)
voiceAgentClient.sendUpdateSpeak({
  type: "UpdateSpeak",
  speak: {
    provider: {
      type: "deepgram",
      model: "aura-2-luna-en",
    },
  },
});

// Inject text messages for deterministic flows (sendInjectUserMessage / sendInjectAgentMessage)
voiceAgentClient.sendInjectUserMessage({
  type: "InjectUserMessage",
  content: "Before we continue, ask the caller for their order number.",
});

voiceAgentClient.sendInjectAgentMessage({
  type: "InjectAgentMessage",
  message: "Sure — what is your order number?",
});
```

## Barge-in

To keep the conversation interruption-friendly, the example listens for `Deepgram.VoiceAgentEvents.UserStartedSpeaking` and clears the media buffer so any in-progress TTS audio is canceled when the caller starts talking:

```js title="Barge-in"
voiceAgentClient.addEventListener(Deepgram.VoiceAgentEvents.UserStartedSpeaking, () => {
  voiceAgentClient.clearMediaBuffer();
});
```

## Events

In the scenario, a focused handler captures the transcript via `Deepgram.VoiceAgentEvents.ConversationText`, and a second handler logs a set of lifecycle/debug events:

```js title="Events (example from the scenario)"
voiceAgentClient.addEventListener(Deepgram.VoiceAgentEvents.ConversationText, (event) => {
  const { role, text } = event?.data?.payload || {};
  if (role && text) Logger.write(`${role}: ${text}`);
});
```

The client supports both **Deepgram Voice Agent events** and **Deepgram WebSocket media events** (see `addEventListener` / `removeEventListener` in the VoiceAgentClient reference):

* `Deepgram.VoiceAgentEvents`: `Welcome`, `SettingsApplied`, `AgentThinking`, `ConversationText`, `UserStartedSpeaking`, `AgentAudioDone`, `FunctionCallRequest`, `FunctionCallResponse`, `PromptUpdated`, `SpeakUpdated`, `History`, `HTTPResponse`, `ConnectorInformation`, `Warning`, `Error`, `WebSocketError`, `Unknown`
* `Deepgram.Events`: `WebSocketMediaStarted`, `WebSocketMediaEnded`

## Notes

[See the VoxEngine API Reference for more details](https://voximplant.com/docs/references/voxengine/deepgram).

## Full VoxEngine scenario

```javascript title="voxengine-deepgram-answer-incoming-call.js" maxLines={0}
/**
 * Voximplant + Deepgram Voice Agent connector demo
 * Scenario: answer an incoming call and bridge it to Deepgram Voice Agent.
 */

require(Modules.Deepgram);
const SYSTEM_PROMPT = `You are a helpful English-speaking voice assistant for phone callers. Keep your turns short and telephony-friendly (usually 1–2 sentences).`;

// -------------------- Deepgram Voice Agent settings --------------------
const SETTINGS_OPTIONS = {
    tags: ["voximplant", "deepgram", "voice_agent_connector", "incoming_call_demo"],
    agent: {
        language: "en",
        greeting: "Hi! I'm Voxi. How can I help today?",
        listen: {
            provider: {
                type: "deepgram",
                model: "flux-general-en",
                // ... additional provider options
            },
        },
        think: {
            provider: {
                type: "open_ai",
                model: "gpt-4o-mini",
                // ... additional provider options
            },
            prompt: SYSTEM_PROMPT,
            // ... additional think options: context_length, functions, endpoint
        },
        speak: {
            provider: {
                type: "deepgram",
                model: "aura-2-cordelia-en",
                // Examples for other providers:
                // "type": "open_ai", "model": "tts-1", "voice": "alloy"
                // "type": "eleven_labs", "model_id": "eleven_monolingual_v1", "language_code": "en-US"
                // "type": "cartesia", "model_id": "sonic-2", "voice": {"mode": "id", "id": "voice-id"}, "language": "en"
                // "type": "aws_polly", "voice": "Matthew", "language_code": "en-US", "engine": "standard", "credentials": {...}
            },
            // ... additional speak options: endpoint (required for non-deepgram providers)
        },
        // ... additional agent options: context
    },
};

VoxEngine.addEventListener(AppEvents.CallAlerting, async ({call}) => {
    let voiceAIClient;

    // Terminate the scenario when the call ends or fails - add cleanup and logging as needed
    call.addEventListener(CallEvents.Disconnected, () => VoxEngine.terminate());
    call.addEventListener(CallEvents.Failed, () => VoxEngine.terminate());

    try {
        call.answer();
        // call.record({ hd_audio: true, stereo: true });   // Optional: record the call

        // Create client and wire media
        voiceAIClient = await Deepgram.createVoiceAgentClient({
            apiKey: VoxEngine.getSecretValue('DEEPGRAM_API_KEY'), // Add your Deepgram API key to Voximplant Secrets under `DEEPGRAM_API_KEY`
            settingsOptions: SETTINGS_OPTIONS,
        });
        VoxEngine.sendMediaBetween(call, voiceAIClient);


        // ---------------------- Event handlers -----------------------
        // Barge-in: keep conversation responsive
        voiceAIClient.addEventListener(Deepgram.VoiceAgentEvents.UserStartedSpeaking, () => {
            Logger.write("===BARGE-IN: Deepgram.VoiceAgentEvents.UserStartedSpeaking===");
            voiceAIClient.clearMediaBuffer();
        });

        // Capture transcript
        voiceAIClient.addEventListener(Deepgram.VoiceAgentEvents.ConversationText, (event) => {
            const {role, text} = event?.data?.payload || {};
            if (role && text) {
                Logger.write(`===TRANSCRIPT=== ${role}: ${text}`);
            }
        });

        // Consolidated "log-only" handlers - key Deepgram/VoxEngine debugging events
        [
            Deepgram.VoiceAgentEvents.Welcome,
            Deepgram.VoiceAgentEvents.SettingsApplied,
            Deepgram.VoiceAgentEvents.AgentThinking,
            Deepgram.VoiceAgentEvents.AgentAudioDone,
            Deepgram.VoiceAgentEvents.ConnectorInformation,
            Deepgram.VoiceAgentEvents.HTTPResponse,
            Deepgram.VoiceAgentEvents.Warning,
            Deepgram.VoiceAgentEvents.Error,
            Deepgram.VoiceAgentEvents.Unknown,
            Deepgram.Events.WebSocketMediaStarted,
            Deepgram.Events.WebSocketMediaEnded,
        ].forEach((eventName) => {
            voiceAIClient.addEventListener(eventName, (event) => {
                Logger.write(`===${event.name}===`);
                Logger.write(JSON.stringify(event));
            });
        });
    } catch (e) {
        Logger.write("===UNHANDLED_ERROR===");
        Logger.write(e instanceof Error ? String(e.stack) : String(e));
        voiceAIClient?.close();
        VoxEngine.terminate();
    }
});

```