
Cloud-side JavaScript "Promise" handling update
Make sure to use the "catch" method on your promises!
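The reminder above can be illustrated with a minimal plain-JavaScript sketch. The `fetchGreeting` helper is hypothetical, standing in for any asynchronous operation in a cloud scenario; the point is that every promise chain should end in a `catch` so a rejection is handled instead of going unobserved:

```javascript
// Hypothetical async helper standing in for any promise-returning call
// (an HTTP request, a database lookup, etc.).
function fetchGreeting(shouldFail) {
  return new Promise((resolve, reject) => {
    if (shouldFail) {
      reject(new Error("network error"));
    } else {
      resolve("hello");
    }
  });
}

// Handle both the success and the failure path explicitly.
fetchGreeting(true)
  .then((text) => {
    // Success path: use the resolved value.
    console.log("Got greeting:", text);
  })
  .catch((err) => {
    // Failure path: log and recover instead of leaving the
    // rejection unhandled.
    console.log("Request failed:", err.message);
  });
```

Without the final `catch`, the rejection from `fetchGreeting(true)` would be unhandled; with it, the error is logged and the scenario continues normally.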

OpenAI has recently announced the GA version of its Realtime API, which Voximplant now fully supports

New integrations for Voice AI have arrived: Google's Gemini 2.0 Flash model, featuring seamless voice-to-voice conversation capabilities, and ElevenLabs' low-latency streaming speech synthesis are now available to Voximplant developers

Learn how a Voice AI Orchestration Platform connects LLMs, STT/TTS, turn‑taking, and telephony (PSTN, SIP, WebRTC) to build reliable real‑time voice agents. See benefits, architecture, and how Voximplant helps.

New Features in Voximplant Kit: update overview. We are constantly working to improve our product to make it easier to use and more effective for you. In this update, we have added several useful features. Here’s what’s new:

Voximplant now includes a native Cartesia module for streaming, low-latency text-to-speech (TTS). You can use a single VoxEngine API to synthesize speech in real time, connect it to any call (PSTN, SIP, WebRTC, WhatsApp), and control playback from a Large Language Model (LLM) or another source, all inside VoxEngine.

Voximplant now lets developers build full-cascade voice AI pipelines in VoxEngine without sacrificing turn-taking quality.

Connect any Voximplant call to ElevenLabs Conversational AI agents

Voximplant now includes a native Deepgram module that connects any Voximplant call to Deepgram’s Voice Agent API for real-time, speech‑to‑speech conversations. You can stream audio from phone numbers, SIP trunks, WhatsApp, or WebRTC into Deepgram’s unified agent environment—combining STT, LLM reasoning, and TTS—and play responses via Voximplant’s serverless runtime with minimal latency.