> For a complete documentation index, fetch https://docs.voximplant.ai/llms.txt

# Example: Answering an incoming call

> This example answers an inbound Voximplant call and bridges audio to ElevenLabs Agents for real-time speech-to-speech conversations.

**Jump to the [Full VoxEngine scenario](#full-voxengine-scenario).**

## Prerequisites

* Set up an inbound entrypoint for the caller:
  * [Phone numbers](https://voximplant.com/docs/getting-started/basic-concepts/phone-numbers)
  * [WhatsApp](https://voximplant.com/docs/guides/integrations/whatsapp)
  * [SIP users / SIP registration](https://voximplant.com/docs/guides/calls/sip)
  * [App users](https://voximplant.com/docs/getting-started/basic-concepts/users) (see also [How to call a Voximplant user](https://voximplant.com/docs/guides/calls/scenarios#how-to-call-a-voximplant-user))
* Create a [routing rule](https://voximplant.com/docs/getting-started/basic-concepts/routing-rules) that points the destination (phone number / WhatsApp / SIP username / app user alias) to this scenario.
* Store your ElevenLabs API key in Voximplant [Secrets](/platform/voxengine/secrets) under `ELEVENLABS_API_KEY`.
* Store your ElevenLabs Agent ID in Voximplant `ApplicationStorage` under `ELEVENLABS_AGENT_ID` (the agent ID is not sensitive, so it can live in key-value storage).
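The scenario later reads the agent ID back at call time. A defensive sketch for unwrapping that read (the `unwrapStoredValue` helper is hypothetical, not part of the VoxEngine API; it assumes `ApplicationStorage.get` resolves to a `{ key, value }` record, or a null-ish result when the key was never stored):

```javascript
// Hypothetical helper: unwrap the { key, value } record returned by
// ApplicationStorage.get and fail fast when the key was never seeded.
function unwrapStoredValue(record, key) {
  if (!record || typeof record.value !== "string" || record.value.length === 0) {
    throw new Error(`ApplicationStorage key "${key}" is missing or empty`);
  }
  return record.value;
}

// Intended use inside the scenario (ApplicationStorage exists only in VoxEngine):
// const agentId = unwrapStoredValue(
//   await ApplicationStorage.get("ELEVENLABS_AGENT_ID"),
//   "ELEVENLABS_AGENT_ID",
// );
```

Failing fast here surfaces a missing key as a clear log line instead of a cryptic error from `createAgentsClient`.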

## Session setup

ElevenLabs Agents are configured in the ElevenLabs console. In VoxEngine, you only need the API key and agent ID.

In the full example, the client is created with:

```js title="Create Agents client"
agentsClient = await ElevenLabs.createAgentsClient({
    xiApiKey: VoxEngine.getSecretValue('ELEVENLABS_API_KEY'),  // from Secrets
    agentId: (await ApplicationStorage.get("ELEVENLABS_AGENT_ID")).value,  // from ApplicationStorage
});
```

<Info title="Configure prompts and tools in ElevenLabs">
  Prompts, voices, and tools live in your ElevenLabs Agent configuration. Update them in the ElevenLabs console and reuse the same agent ID in VoxEngine.
</Info>

## Connect call audio

Once you have an `ElevenLabs.AgentsClient`, bridge audio both ways between the call and the agent:

```js title="Connect call audio"
VoxEngine.sendMediaBetween(call, agentsClient);
```

## Barge-in

To keep the conversation interruption-friendly, the example listens for `ElevenLabs.AgentsEvents.Interruption` and clears the media buffer so any in-progress TTS audio is canceled when the caller starts talking:

```js title="Barge-in"
agentsClient.addEventListener(ElevenLabs.AgentsEvents.Interruption, () => {
  agentsClient.clearMediaBuffer();
});
```

## Events

The scenario logs transcripts and key lifecycle events. For example:

```js title="Events (example from the scenario)"
agentsClient.addEventListener(ElevenLabs.AgentsEvents.UserTranscript, (event) => {
  const payload = event?.data?.payload || event?.data || {};
  const text = payload.text || payload.transcript || payload.user_transcript;
  if (text) Logger.write(`USER: ${text}`);
});
```
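The same fallback chain repeats in the full scenario for agent responses (`text`, `response`, `agent_response`), so it can be factored into one small pure function. A sketch (the `extractText` helper and the field lists are hypothetical, not part of the ElevenLabs module):

```javascript
// Hypothetical helper: return the first non-empty string among the given
// payload fields, or null if none is present.
function extractText(event, fields) {
  const payload = event?.data?.payload || event?.data || {};
  for (const field of fields) {
    const value = payload[field];
    if (typeof value === "string" && value.length > 0) return value;
  }
  return null;
}

// Field names differ per event type:
const USER_FIELDS = ["text", "transcript", "user_transcript"];
const AGENT_FIELDS = ["text", "response", "agent_response"];
```

With this, both transcript handlers collapse to one-liners, e.g. `Logger.write(`USER: ${extractText(event, USER_FIELDS)}`)`.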

## Notes

[See the VoxEngine API Reference for more details](https://voximplant.com/docs/references/voxengine/elevenlabs).

## Full VoxEngine scenario

```javascript title={"voxengine-elevenlabs-inbound.js"} maxLines={0}
/**
 * Voximplant + ElevenLabs Agents connector demo
 * Scenario: answer an incoming call and bridge it to ElevenLabs Agents.
 */

require(Modules.ElevenLabs);
require(Modules.ApplicationStorage);


VoxEngine.addEventListener(AppEvents.CallAlerting, async ({call}) => {
    let voiceAIClient;

    // Termination handlers
    call.addEventListener(CallEvents.Disconnected, () => VoxEngine.terminate());
    call.addEventListener(CallEvents.Failed, () => VoxEngine.terminate());

    try {
        call.answer();
        // call.record({hd_audio: true, stereo: true}); // Optional: record the call

        // Create client and connect to ElevenLabs Agents
        voiceAIClient = await ElevenLabs.createAgentsClient({
            xiApiKey: VoxEngine.getSecretValue('ELEVENLABS_API_KEY'),
            agentId: (await ApplicationStorage.get("ELEVENLABS_AGENT_ID")).value,
            onWebSocketClose: (event) => {
                Logger.write("===ElevenLabs.WebSocket.Close===");
                if (event) Logger.write(JSON.stringify(event));
                VoxEngine.terminate();
            },
        });

        // Bridge media between the call and ElevenLabs Agents
        VoxEngine.sendMediaBetween(call, voiceAIClient);

        // ---------------------- Event handlers -----------------------
        // Barge-in: keep conversation responsive
        voiceAIClient.addEventListener(ElevenLabs.AgentsEvents.Interruption, () => {
            Logger.write("===BARGE-IN: ElevenLabs.AgentsEvents.Interruption===");
            voiceAIClient.clearMediaBuffer();
        });

        voiceAIClient.addEventListener(ElevenLabs.AgentsEvents.UserTranscript, (event) => {
            const payload = event?.data?.payload || event?.data || {};
            const text = payload.text || payload.transcript || payload.user_transcript;
            if (text) {
                Logger.write(`===USER=== ${text}`);
            } else {
                Logger.write("===USER_TRANSCRIPT===");
                Logger.write(JSON.stringify(payload));
            }
        });

        voiceAIClient.addEventListener(ElevenLabs.AgentsEvents.AgentResponse, (event) => {
            const payload = event?.data?.payload || event?.data || {};
            const text = payload.text || payload.response || payload.agent_response;
            if (text) {
                Logger.write(`===AGENT=== ${text}`);
            } else {
                Logger.write("===AGENT_RESPONSE===");
                Logger.write(JSON.stringify(payload));
            }
        });

        // Consolidated "log-only" handlers - key ElevenLabs/VoxEngine debugging events
        [
            ElevenLabs.AgentsEvents.ConversationInitiationMetadata,
            ElevenLabs.AgentsEvents.AgentResponseCorrection,
            ElevenLabs.AgentsEvents.ContextualUpdate,
            ElevenLabs.AgentsEvents.AgentToolResponse,
            ElevenLabs.AgentsEvents.VadScore,
            ElevenLabs.AgentsEvents.Ping,
            ElevenLabs.AgentsEvents.HTTPResponse,
            ElevenLabs.AgentsEvents.WebSocketError,
            ElevenLabs.AgentsEvents.ConnectorInformation,
            ElevenLabs.AgentsEvents.Unknown,
            ElevenLabs.Events.WebSocketMediaStarted,
            ElevenLabs.Events.WebSocketMediaEnded,
        ].forEach((eventName) => {
            voiceAIClient.addEventListener(eventName, (event) => {
                Logger.write(`===${event.name}===`);
                if (event?.data) Logger.write(JSON.stringify(event.data));
            });
        });
    } catch (error) {
        Logger.write("===UNHANDLED_ERROR===");
        Logger.write(String(error));
        voiceAIClient?.close();
        VoxEngine.terminate();
    }
});

```