> For a complete documentation index, fetch https://docs.voximplant.ai/llms.txt

# Example: Placing an outbound call

> This example starts a VoxEngine session, places an outbound PSTN call, and bridges audio to OpenAI Realtime once the callee answers.


**Jump to the [Full VoxEngine scenario](#full-voxengine-scenario).**

## Prerequisites

* Store your OpenAI API key in Voximplant [Secrets](/platform/voxengine/secrets) under `OPENAI_API_KEY`.
* Ensure outbound calling is enabled for your Voximplant application and that your caller ID is verified.

## Outbound call parameters

The example expects destination and caller ID in `customData` (read via `VoxEngine.customData()`):

```json title="Custom data example"
{"destination":"+15551234567","callerId":"+15557654321"}
```
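The scenario parses this JSON string at startup. A minimal validation sketch in plain JavaScript (`isE164` and `parseOutboundParams` are hypothetical helpers for illustration, not part of the VoxEngine API):

```javascript
// Hypothetical helper: loose sanity check that a number looks like E.164
// (a leading "+" followed by 8-15 digits). Not a full number validator.
function isE164(number) {
  return /^\+[1-9]\d{7,14}$/.test(number);
}

// Parse the custom data string and fail fast on missing or malformed fields,
// instead of discovering the problem mid-call.
function parseOutboundParams(customData) {
  const {destination, callerId} = JSON.parse(customData);
  if (!isE164(destination) || !isE164(callerId)) {
    throw new Error("destination and callerId must be E.164 numbers");
  }
  return {destination, callerId};
}
```

In the full scenario below, this corresponds to the `JSON.parse(VoxEngine.customData())` line inside the `try` block.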

## Launch the routing rule

For quick testing, you can start this outbound scenario from the Voximplant Control Panel:

1. Open your Voximplant application and go to the **Routing** tab.
2. Select the routing rule that has this scenario attached.
3. Click **Run**.
4. Provide **Custom data** (max 200 bytes) with `destination` and `callerId`:

```json title="Custom data example"
{"destination":"+15551234567","callerId":"+15557654321"}
```

For production, start the routing rule via the Management API [`StartScenarios`](https://voximplant.com/docs/references/httpapi/scenarios#startscenarios) method: pass the `rule_id` and the same JSON string in `script_custom_data`.
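A sketch of building the request parameters for that call. The parameter names follow the `StartScenarios` reference; `buildStartScenariosParams` is a hypothetical helper, and applying the 200-byte custom-data limit here is an assumption carried over from the control-panel flow above:

```javascript
// Build the query parameters for a StartScenarios request.
function buildStartScenariosParams({ruleId, destination, callerId}) {
  const scriptCustomData = JSON.stringify({destination, callerId});
  // Stay within the custom-data size limit (200 bytes in the control
  // panel; assumed to apply to script_custom_data as well).
  if (Buffer.byteLength(scriptCustomData, "utf8") > 200) {
    throw new Error("script_custom_data exceeds 200 bytes");
  }
  return new URLSearchParams({
    rule_id: String(ruleId),
    script_custom_data: scriptCustomData,
  });
}
```

The resulting query string is sent to the `StartScenarios` endpoint together with your account credentials.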

## Alternate outbound destinations

This example uses `VoxEngine.callPSTN(...)` for PSTN dialing. You can also route outbound calls to other destination types in VoxEngine:

* **SIP** (`VoxEngine.callSIP`): dial a SIP URI to reach a PBX, carrier, SIP trunk, or other SIP endpoint.
* **WhatsApp** (`VoxEngine.callWhatsappUser`): place a WhatsApp Business-initiated call (requires a WhatsApp Business account and enabled numbers).
* **Voximplant users** (`VoxEngine.callUser`): call another user of the same Voximplant application, such as a web SDK, mobile SDK, or SIP client.

Relevant guides:

* [SIP calling options](/getting-started/network-options/sip)
* [Voximplant users calling options](/getting-started/network-options/web-mobile)
* [WhatsApp calling options](/getting-started/network-options/whatsapp)
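If one scenario needs to serve several of these destination types, a small dispatcher can choose the dial method from the destination string. A sketch; the prefix convention here (`sip:` URIs, `+`-prefixed numbers, bare usernames) is an assumption for illustration, not a platform rule:

```javascript
// Map a destination string to the name of the VoxEngine dial method to use.
// Assumed convention: "sip:" URIs go to callSIP, E.164-style numbers
// ("+" followed by digits) to callPSTN, anything else to callUser.
function pickDialMethod(destination) {
  if (destination.startsWith("sip:")) return "callSIP";
  if (/^\+\d+$/.test(destination)) return "callPSTN";
  return "callUser";
}
```

In the scenario this would select between the `callPSTN`, `callSIP`, and `callUser` branches shown in the full example below (each takes different arguments, so the branch bodies stay separate).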

## Connect call audio

After the callee answers, the example bridges audio both ways:

```js title="Connect call audio"
VoxEngine.sendMediaBetween(call, voiceAIClient);
```

## Barge-in

```js title="Barge-in"
voiceAIClient.addEventListener(OpenAI.RealtimeAPIEvents.InputAudioBufferSpeechStarted, () => {
  voiceAIClient.clearMediaBuffer();
});
```

## Notes

For details on the `OpenAI` module used here, see the [VoxEngine API reference](https://voximplant.com/docs/references/voxengine/openai).

## Full VoxEngine scenario

```javascript title={"voxengine-openai-place-outbound-call.js"} maxLines={0}
/**
 * Voximplant + OpenAI Realtime API connector demo
 * Scenario: place an outbound PSTN call and bridge it to OpenAI Realtime.
 */

require(Modules.OpenAI);
const SYSTEM_PROMPT = `
You are Voxi, a concise phone assistant for outbound calls.
Keep responses short and helpful.
`;

const SESSION_CONFIG = {
    session: {
        type: "realtime",
        instructions: SYSTEM_PROMPT,
        voice: "alloy",
        output_modalities: ["audio"],
        turn_detection: {type: "server_vad", interrupt_response: true},
    },
};

const MAX_CALL_MS = 2 * 60 * 1000;

VoxEngine.addEventListener(AppEvents.Started, async () => {
    let call;
    let voiceAIClient;
    let hangupTimer;

    try {
        // Custom data example: {"destination":"+15551234567","callerId":"+15557654321"}
        const {destination, callerId} = JSON.parse(VoxEngine.customData());

        call = VoxEngine.callPSTN(destination, callerId);
        // Alternative outbound paths (uncomment to use):
        // call = VoxEngine.callUser({username: destination, callerid: callerId});
        // call = VoxEngine.callSIP(`sip:${destination}@your-sip-domain`, callerId);
        // call = VoxEngine.callWhatsappUser({number: destination, callerid: callerId});

        call.addEventListener(CallEvents.Failed, () => VoxEngine.terminate());
        call.addEventListener(CallEvents.Disconnected, () => {
            if (hangupTimer) clearTimeout(hangupTimer);
            VoxEngine.terminate();
        });

        call.addEventListener(CallEvents.Connected, async () => {
            hangupTimer = setTimeout(() => {
                Logger.write("===HANGUP_TIMER===");
                call.hangup();
            }, MAX_CALL_MS);

            voiceAIClient = await OpenAI.createRealtimeAPIClient({
                apiKey: VoxEngine.getSecretValue('OPENAI_API_KEY'),
                model: "gpt-realtime-1.5",
                onWebSocketClose: (event) => {
                    Logger.write("===OpenAI.WebSocket.Close===");
                    if (event) Logger.write(JSON.stringify(event));
                    VoxEngine.terminate();
                },
            });

            voiceAIClient.addEventListener(OpenAI.RealtimeAPIEvents.SessionCreated, () => {
                voiceAIClient.sessionUpdate(SESSION_CONFIG);
            });

            voiceAIClient.addEventListener(OpenAI.RealtimeAPIEvents.SessionUpdated, () => {
                VoxEngine.sendMediaBetween(call, voiceAIClient);
                voiceAIClient.responseCreate({instructions: "Hello! This is Voxi. How can I help today?"});
            });

            voiceAIClient.addEventListener(
                OpenAI.RealtimeAPIEvents.InputAudioBufferSpeechStarted,
                () => {
                    Logger.write("===BARGE-IN: OpenAI.InputAudioBufferSpeechStarted===");
                    voiceAIClient.clearMediaBuffer();
                }
            );

            // Consolidated "log-only" handlers
            [
                OpenAI.RealtimeAPIEvents.ResponseCreated,
                OpenAI.RealtimeAPIEvents.ResponseDone,
                OpenAI.RealtimeAPIEvents.ResponseOutputAudioDone,
                OpenAI.RealtimeAPIEvents.ConversationItemInputAudioTranscriptionCompleted,
                OpenAI.RealtimeAPIEvents.ResponseOutputAudioTranscriptDone,
                OpenAI.RealtimeAPIEvents.ConnectorInformation,
                OpenAI.RealtimeAPIEvents.HTTPResponse,
                OpenAI.RealtimeAPIEvents.WebSocketError,
                OpenAI.RealtimeAPIEvents.Unknown,
                OpenAI.Events.WebSocketMediaStarted,
                OpenAI.Events.WebSocketMediaEnded,
            ].forEach((eventName) => {
                voiceAIClient.addEventListener(eventName, (event) => {
                    Logger.write(`===${event.name}===`);
                    if (event?.data) Logger.write(JSON.stringify(event.data));
                });
            });
        });
    } catch (error) {
        Logger.write("===UNHANDLED_ERROR===");
        Logger.write(String(error));
        voiceAIClient?.close();
        VoxEngine.terminate();
    }
});

```