Web SDK v5 migration guide
Voximplant Web SDK v5 has introduced major breaking changes to provide more functionality and improve the SDK scalability. This guide explains how to smoothly migrate the application code base to use the new API.
What is new
Voximplant Web SDK v5 introduces a modular architecture that provides greater flexibility and optimizations for web applications:
Performance: fast initialization by loading only the necessary modules
Configuration flexibility: enable only the functionality you need
Smaller bundle size: tree shaking reduces the overall application size
Stability: module isolation prevents cascading failures — an error in one module does not affect others
Easier debugging: separation of functionality simplifies issue isolation
Scalability: easier adoption of new SDK features
Add Voximplant SDK to your project
You can install Voximplant Web SDK via npm or yarn:
npm install @voximplant/websdk --save
# OR
yarn add @voximplant/websdk
Modular system and SDK initialization
Voximplant Web SDK v5 comes with a modular architecture.
First, you need to initialize the Core instance.
SDK initialization parameters in v4 (Voximplant.Config) are removed and replaced with the parameters in other APIs:
Voximplant.Config.H264First → CallSettings.preferredVideoCodec and ConferenceSettings.preferredVideoCodec
Voximplant.Config.enableTrace, Voximplant.Config.showDebugInfo, Voximplant.Config.showWarnings are replaced with the new logger configuration. See the “Capture SDK logs” section for more details
Voximplant.Config.localVideoContainerId, Voximplant.Config.remoteVideoContainerId, Voximplant.Config.videoContainerId are removed. See the “Stream module” section for more details
Voximplant.Config.rtcStatsCollectionInterval → CallSettings.statsCollectionInterval and ConferenceSettings.statsCollectionInterval
From v4:
const sdk = VoxImplant.getInstance();
await sdk.init({
node: $ACCOUNT_NODE
});
To v5:
import { Core } from '@voximplant/websdk';
const core = Core.init({}); // NOTE: the account node is now set on the connection stage
The Core acts as an IoC container for all Voximplant Web SDK modules. Each module has its own loader that creates and configures the module, and a token that allows you to obtain the module instance from anywhere in your app.
Modules:
Client — provides seamless connectivity and authentication for your app users on the Voximplant cloud platform. Registered by default after Core.init call and available via the Core.client property
Call module — provides the API to initiate and manage voice and video calls
Conference module — enables you to build audio and video conferencing solutions
Messaging module — allows you to build robust messaging features, including public and private groups, channels, or private chats
Stream module — provides the API to manage audio and video streams
SmartQueue module — lets you change and monitor call center agent status for calls and messaging
NoiseSuppressionBalanced — offers advanced noise suppression for audio calls with moderate CPU usage
NoiseSuppressionAggressive — provides advanced noise suppression for audio calls in noisy environments
Register modules
import { Core, StreamLoader, CallLoader } from '@voximplant/websdk';
const core = Core.init({});
// module is registered synchronously
core.registerModules([
StreamLoader(),
CallLoader(),
]);
// module is registered asynchronously
import('@voximplant/websdk/modules/noise-suppression-balanced')
.then( noiseSuppressionModule => {
const { NoiseSuppressionBalancedLoader } = noiseSuppressionModule
core.registerModules([NoiseSuppressionBalancedLoader()])
})
Get module
// returns undefined if module is not registered
const streamModule = core.getModule(streamToken);
// resolves after the NoiseSuppressionBalanced module has registered, never rejects
const noiseSuppressionModule = await core.getModuleAsync(noiseSuppressionBalancedToken)
Watchable
Voximplant Web SDK v5 provides a new mechanism to handle changes to SDK properties: the Watchable reactive type. Watchables are used across all SDK modules.
Watchables can be of 2 types:
ReadOnlyWatchable — reactive type for immutable structures and primitives with read-only values
Watchable (read-write) — reactive type for immutable structures and primitives with the value that can be read or set anytime
To subscribe to a value change, use the Watchable.watch API that allows specifying a callback function and options to customize the behavior.
Example:
core.client.state.watch((newState) => {
if (newState === ClientState.Disconnected) {
// Process user disconnected, i.e. redirect to login screen
}
})
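Besides subscribing, the current value of a watchable can be read from its value property, and read-write watchables can also be assigned through it. A minimal sketch; deviceTracker here refers to a DeviceTrackerHelper instance introduced in the Stream module section below:
// Read the current value of a read-only watchable
const currentState = core.client.state.value;
// Assign a new value to a read-write watchable
// (shouldSendVideo is a DeviceTrackerHelper property, see the Stream module section)
deviceTracker.shouldSendVideo.value = true;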
Connect and authenticate Voximplant users
The Client interface provides the API to connect and authenticate Voximplant users.
Notable changes:
Account node is now a required parameter for the Client.connect API and is not required on the SDK initialization
The result of connect and login operations can be handled only with the promises that the SDK returns. The Voximplant.Events.ConnectionEstablished, Voximplant.Events.ConnectionFailed, and Voximplant.Events.AuthResult events are removed
String error descriptions are replaced with the ConnectionErrors and LoginErrors error objects thrown by the SDK API
Method parameters are represented as separate interfaces (e.g., PasswordLoginOptions). This improves code readability
ClientEvent.Disconnected event provides the reason the connection to the Voximplant Cloud is closed (see the sketch after this list)
Client.getClientState is replaced with a read-only Client.state watchable property
To (un)subscribe to the client events, use Client.addEventListener instead of Client.on and Client.removeEventListener instead of Client.off
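For example, a lost connection can be handled either with the ClientEvent.Disconnected event or by watching Client.state. A minimal sketch; the payload shape of the Disconnected event and the import locations of ClientEvent and ClientState are assumptions, check the API reference and the package exports:
// NOTE: the payload shape below is an assumption — check ClientEvent.Disconnected in the API reference
client.addEventListener(ClientEvent.Disconnected, ({ payload }) => {
  console.log('Connection to the Voximplant Cloud closed:', payload);
});
// Alternatively, watch the connection state
client.state.watch((newState) => {
  if (newState === ClientState.Disconnected) {
    // e.g. show the login screen again
  }
});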
Connect and log in with password
From v4:
const sdk = VoxImplant.getInstance();
const ACCOUNT_NODE = VoxImplant.ConnectionNode.${NODE_n}
await sdk.init({
node: ACCOUNT_NODE
});
await sdk.connect();
await sdk.login("username@app.account.voximplant.com","password");
To v5:
import { Core } from '@voximplant/websdk';
const core = Core.init({});
const { client } = core;
const ACCOUNT_NODE = ConnectionNode.${NODE_n}
await client.connect({
node: ACCOUNT_NODE,
});
await client.login({
username: "username@app.account.voximplant.com",
password: "password",
});
Connect and log in with one time key
From v4:
const username = `username@app.account.voximplant.com`;
const sdk = VoxImplant.getInstance();
sdk.init();
// Connect to the cloud and request a key
sdk.connect().then(() => sdk.requestOneTimeLoginKey(username));
// Listen to the server response
sdk.addEventListener(VoxImplant.Events.AuthResult, async (e) => {
if (e.result) {
// Login is successful
} else if (e.code == 302) {
const { key } = e;
// IMPORTANT: You should always calculate the token on your backend!
const hashRequest = await fetch('https://your.backend.com/', {
method: 'POST',
body: JSON.stringify({
username,
key,
}),
});
const hash = await hashRequest.json();
sdk.loginWithOneTimeKey(username, hash);
}
});
To v5:
import { Core } from '@voximplant/websdk';
const core = Core.init({});
const { client } = core;
const ACCOUNT_NODE = ConnectionNode.${NODE_n};
const username = "username@app.account.voximplant.com";
await client.connect({
node: ACCOUNT_NODE,
});
const key = await client.requestOneTimeKey({
username,
});
// IMPORTANT: You should always calculate the token on your backend!
const hashRequest = await fetch('https://your.backend.com/', {
method: 'POST',
body: JSON.stringify({
username,
key,
}),
});
const hash = await hashRequest.json();
const loginData = await client.loginOneTimeKey({
username,
hash,
});
Stream module
Voximplant Web SDK v5 provides a separate Stream module to manage audio and video streams. Unlike the previous SDK version, audio/video streams should be created and managed on the application side. This approach gives more flexibility for the application to implement such functionality as video preview and camera configuration.
The Stream module gives the ability to manage streams in 2 ways:
- Manually via a low level API represented by the StreamManager and Hardware interfaces
- With the helpers:
- DeviceTrackerHelper — simplifies handling audio and video device changes during an active call or conference.
- AudioProcessor — gives the ability to preprocess audio before sending it within a call or conference
The code samples in this guide use the DeviceTrackerHelper API.
First, you need to register the Stream module:
import { streamToken, StreamLoader } from '@voximplant/websdk/modules/stream';
const core = Core.init({});
core.registerModules([StreamLoader()]);
Create a DeviceTrackerHelper instance and enable it:
const streamModule = core.getModule(streamToken)!;
const deviceTracker = streamModule.createHelper(StreamHelper.DeviceTracker);
deviceTracker.enableTracker();
Show the local preview
Voximplant Web SDK v5 has changed the approach to render local and remote streams. To render a stream, you should create a renderer (AudioRenderer or VideoRenderer) and mount it yourself to the DOM. The RendererManager interface provides the API to manage existing renderers.
It is important to remove audio and video renderers when the stream associated with the renderer no longer exists, for example, when a call or conference has ended, video sending is disabled on the participant side, or the local preview is hidden.
For remote streams, handle call or conference events; for local streams, handle user interactions on the UI side.
From v4:
await sdk.init({
node: VoxImplant.ConnectionNode.NODE_n,
// id of the HTMLElement that is used as a default container for local video elements
localVideoContainerId: 'local_video_holder',
});
await sdk.showLocalVideo(true);
To v5:
const streamModule = core.getModule(streamToken);
const deviceTracker = streamModule.createHelper(StreamHelper.DeviceTracker);
deviceTracker.enableTracker();
await deviceTracker.startPreviewVideo();
const stream = deviceTracker.previewVideoStream.value;
const renderer = streamModule.rendererManager.createVideoRenderer(stream);
const videoElement = renderer.getElement();
// mount videoElement to any DOM element
document.getElementById('local_video_holder')?.appendChild(videoElement);
Calls
Voximplant Web SDK v5 has separate classes that represent a call and a conference. This makes the SDK functionality for calls and conferences clearer.
Notable changes:
Separate call and conference settings to start a call or a conference
Separate call and conference events
Call events do not provide the Call instance that triggered an event; the call id is provided instead
Endpoints are now available only for conferences. Information about the other call participant, as well as its events, is now available via the Call API
Updated call states that can be tracked via the Call.state watchable
Screen sharing video stream does not replace a video stream in a video call. Remote participant now receives both streams: video and screen sharing
The Call API throws typed errors
Autohold logic has changed: if an incoming call arrives while the user is in an active call, the active call is put on hold when the incoming call is answered
Call transfer is now available via the CallManager.transferCall API
Call events changes:
CallEvent.RemoteMediaAdded and CallEvent.RemoteMediaRemoved are triggered when a remote audio or video stream becomes available or is removed
CallEvents.ActiveUpdated event is removed. To handle the hold status of a call, use the Call.isOnHold watchable property (see the sketch after this list)
CallEvents.SharingStopped event is removed. To handle any stream end, use the StreamEvent.Ended event from the Stream module. If the DeviceTrackerHelper is attached to a call, the screen sharing stop event is handled and processed automatically by the SDK.
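For example, the call state and the hold status mentioned above can be tracked with watchables. A minimal sketch, where call denotes a Call instance created in the examples below; the concrete state values are not listed in this guide, check the API reference:
// Track the call lifecycle via the state watchable
call.state.watch((newState) => {
  console.log('Call state changed:', newState);
});
// Track the hold status instead of the removed ActiveUpdated event
call.isOnHold.watch((isOnHold) => {
  // update the "on hold" indicator in the UI
});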
Before you start using the Call module API, you should register the Call module via the Core.registerModules API:
import { streamToken, StreamLoader } from '@voximplant/websdk/modules/stream';
import { callToken, CallLoader } from '@voximplant/websdk/modules/call-manager';
const core = Core.init({});
core.registerModules([StreamLoader(), CallLoader()]);
The Stream module is a required dependency for the Call module. If the Stream module is not registered, the Call module registration fails with a DependencyMissingError.
The CallManager interface is the entry point to the Call module. It provides the API to create an outgoing call, handle incoming calls, and get all currently active calls.
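For instance, the currently active calls can be enumerated via CallManager.getCalls, which appears to return a map keyed by call id (it is accessed with .get in the incoming call example below). A minimal sketch:
const callManager = core.getModule(callToken);
// Iterate over all currently active calls
callManager.getCalls().forEach((activeCall, callId) => {
  console.log(`Call ${callId}, on hold:`, activeCall.isOnHold.value);
});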
Start an outgoing audio call
From v4:
const call = sdk.call({
number: $DESTINATION
})
To v5:
const callManager = core.getModule(callToken)!;
const streamModule = core.getModule(streamToken)!;
const deviceTracker = streamModule.createHelper(StreamHelper.DeviceTracker);
const call = callManager.createCall($DESTINATION);
deviceTracker.enableTracker();
deviceTracker.attachCall(call);
await call.start();
Start an outgoing video call
From v4:
const call = sdk.call({
number: $DESTINATION,
video: {
sendVideo: true,
receiveVideo: true,
},
})
To v5:
const callManager = core.getModule(callToken)!;
const streamModule = core.getModule(streamToken)!;
const deviceTracker = streamModule.createHelper(StreamHelper.DeviceTracker);
const call = callManager.createCall($DESTINATION);
deviceTracker.enableTracker();
deviceTracker.attachCall(call);
deviceTracker.shouldSendVideo.value = true;
await call.start();
Handle incoming calls
From v4:
sdk.on(VoxImplant.Events.IncomingCall, (e) => {
const { call } = e;
// add listeners to call
processCallEvents(call);
call.answer()
});
To v5:
const callManager = core.getModule(callToken);
const streamModule = core.getModule(streamToken);
callManager.addEventListener(
CallManagerEvent.IncomingCall,
({ payload }) => {
const call = callManager.getCalls().get(payload.callId);
const deviceTracker = streamModule.createHelper(StreamHelper.DeviceTracker);
deviceTracker.enableTracker();
deviceTracker.attachCall(call);
// add listeners to call
processCallEvents(call);
call.answer();
}
)
Render remote audio
Voximplant Web SDK v5 has changed the approach to render local and remote streams. To render an audio stream, you should create an AudioRenderer instance via the RendererManager.createAudioRenderer API and mount it yourself to the DOM. The RendererManager interface provides the API to manage existing renderers.
To handle the audio stream events, subscribe to the CallEvent.RemoteMediaAdded event via the Call.addEventListener API. Use the CallRemoteMediaAddedPayload.type property to determine the stream type.
Do not create audio renderers for local audio streams. Local audio streams are used to send audio to a call and should not be played for the current user.
const streamModule = core.getModule(streamToken);
const remoteAudioContainer = document.getElementById('remote_audio_holder');
call.remoteStreams.watch( remoteStreamsSet => {
remoteAudioContainer.replaceChildren();
remoteStreamsSet.forEach( remoteStream => {
if ([StreamType.Audio, StreamType.ScreenAudio].includes(remoteStream.type)) {
const renderer = streamModule.rendererManager.createAudioRenderer(remoteStream);
const audioElement = renderer.getElement();
remoteAudioContainer.appendChild(audioElement);
}
})
})
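Alternatively, the event-based approach described above can be used. A minimal sketch, assuming the CallRemoteMediaAddedPayload exposes the added remote stream alongside its type (the exact field names may differ, check the API reference):
call.addEventListener(CallEvent.RemoteMediaAdded, ({ payload }) => {
  // NOTE: payload.type and payload.stream are assumed field names
  if ([StreamType.Audio, StreamType.ScreenAudio].includes(payload.type)) {
    const renderer = streamModule.rendererManager.createAudioRenderer(payload.stream);
    remoteAudioContainer.appendChild(renderer.getElement());
  }
});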
Render local and remote video
From v4:
await sdk.init({
...
localVideoContainerId: 'local_video_holder',
remoteVideoContainerId: 'remote_video_holder',
});
To v5:
const streamModule = core.getModule(streamToken);
const localVideoContainer = document.getElementById('local_video_holder');
call.localStreams.watch( localStreamsSet => {
localStreamsSet.forEach( localStream => {
if ([StreamType.Video, StreamType.ScreenVideo].includes(localStream.type)) {
const renderer = streamModule.rendererManager.createVideoRenderer(localStream);
const videoElement = renderer.getElement();
localVideoContainer.appendChild(videoElement);
}
})
})
const remoteVideoContainer = document.getElementById('remote_video_holder');
call.remoteStreams.watch( remoteStreamsSet => {
remoteVideoContainer.replaceChildren();
remoteStreamsSet.forEach( remoteStream => {
if ([StreamType.Video, StreamType.ScreenVideo].includes(remoteStream.type)) {
const renderer = streamModule.rendererManager.createVideoRenderer(remoteStream);
const videoElement = renderer.getElement();
remoteVideoContainer.appendChild(videoElement);
}
})
})
Conferences
Voximplant Web SDK v5 has separate classes that represent a call and a conference. This makes the SDK functionality for calls and conferences clearer.
Notable changes:
Separate call and conference settings to start a call or a conference
Separate call and conference events
Updated conference states that can be tracked via the Conference.state watchable
The Conference API throws typed errors
The Conference.endpoints watchable can be used as an alternative to the ConferenceEvent.EndpointAdded and ConferenceEvent.EndpointRemoved events
Starting screen sharing in a conference now adds a new endpoint with the screen sharing stream
The Conference.voiceActivityDetected watchable notifies when voice activity is detected for the current user in a conference
The Endpoint.isMicrophoneMuted watchable notifies if a remote participant has muted their microphone via the Conference.muteMicrophone API
Endpoint.enableAll API is replaced with the Endpoint.startReceivingAudio and Endpoint.startReceivingVideo API
Conference events changes:
Conference and Endpoint events do not provide the Conference/Endpoint instance that triggered an event; the conference/endpoint id is provided instead
EndpointEvent.RemoteMediaAdded and EndpointEvent.RemoteMediaRemoved events provide a RemoteStream instance. To render a remote audio or video stream, you should create an AudioRenderer or VideoRenderer and put it into the DOM
EndpointEvents.VoiceStart and EndpointEvents.VoiceEnd events are replaced with the Endpoint.voiceActivityDetected watchable (see the sketch after this list)
EndpointEvents.MediaRenderEnabled event is replaced with EndpointEvent.StartReceivingAudioStream and EndpointEvent.StartReceivingVideoStream events
EndpointEvents.MediaRenderDisabled event is replaced with EndpointEvent.StopReceivingAudioStream and EndpointEvent.StopReceivingVideoStream events
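For example, voice activity for the current user and for a remote participant can be tracked via the watchables mentioned above. A minimal sketch, where conference and endpoint denote a Conference and an Endpoint instance from the examples below; the callback value is assumed to be a boolean flag:
// Voice activity of the current user
conference.voiceActivityDetected.watch((isSpeaking) => {
  // e.g. highlight the local participant tile
});
// Voice activity of a remote participant
endpoint.voiceActivityDetected.watch((isSpeaking) => {
  // e.g. highlight the remote participant tile
});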
Before you start using the Conference module API, you should register the Conference module via the Core.registerModules API:
import { streamToken, StreamLoader } from '@voximplant/websdk/modules/stream';
import { conferenceToken, ConferenceLoader } from '@voximplant/websdk/modules/conference-manager';
const core = Core.init({});
core.registerModules([StreamLoader(), ConferenceLoader()]);
Join a conference
From v4:
// prepare settings
const conferenceSettings = {
number: conferenceName,
simulcast: true,
video: {
sendVideo: true, // whether video is enabled
receiveVideo: true
},
};
const conference = sdk.callConference(conferenceSettings);
To v5:
const conferenceManager = core.getModule(conferenceToken)!;
const streamModule = core.getModule(streamToken)!;
const deviceTracker = streamModule.createHelper(StreamHelper.DeviceTracker);
const conferenceSettings = {
conferenceName: conferenceName,
// simulcast enabled by default
// no video settings here, just add video stream to conference
};
const conference = conferenceManager.createConference(conferenceSettings);
deviceTracker.enableTracker();
deviceTracker.attachConference(conference);
deviceTracker.shouldSendVideo.value = true; // if video should be enabled
await conference.join();
Render remote audio
Voximplant Web SDK v5 has changed the approach to render local and remote streams. To render an audio stream, you should create an AudioRenderer instance via the RendererManager.createAudioRenderer API and mount it yourself to the DOM. The RendererManager interface provides the API to manage existing renderers.
To handle the audio stream events, subscribe to the EndpointEvent.RemoteMediaAdded event via the Endpoint.addEventListener API. Use the EndpointRemoteMediaAddedPayload.type property to determine the stream type.
Do not create audio renderers for local audio streams. Local audio streams are used to send audio to a conference and should not be played for the current user.
const streamModule = core.getModule(streamToken);
const remoteAudioContainer = document.getElementById('remote_audio_holder');
const getRemoteAudioStreams = (conference) =>
Array.from(conference.endpoints.value.values()).flatMap((endpoint) => endpoint.getAnyAudioStreams());
const renderConferenceRemoteAudio = (conference) => {
// clear previously mounted audio elements before re-rendering
remoteAudioContainer.replaceChildren();
const streams = getRemoteAudioStreams(conference);
streams.forEach((stream) => {
const audioRenderer = streamModule.rendererManager.createAudioRenderer(stream);
const audioElement = audioRenderer.getElement();
remoteAudioContainer.appendChild(audioElement);
});
};
conference.endpoints.watch((endpointsMap) => {
endpointsMap.forEach((endpoint) => {
endpoint.addEventListener(EndpointEvent.RemoteMediaAdded, () => {
renderConferenceRemoteAudio(conference);
});
endpoint.addEventListener(EndpointEvent.RemoteMediaRemoved, () => {
renderConferenceRemoteAudio(conference);
});
});
renderConferenceRemoteAudio(conference);
});
Render local and remote video
From v4:
await sdk.init({
...
localVideoContainerId: 'local_video_holder',
remoteVideoContainerId: 'remote_video_holder',
});
To v5:
const streamModule = core.getModule(streamToken);
const localVideoContainer = document.getElementById('local_video_holder');
conference.localStreams.watch( localStreamsSet => {
localStreamsSet.forEach( localStream => {
if ([StreamType.Video, StreamType.ScreenVideo].includes(localStream.type)) {
const renderer = streamModule.rendererManager.createVideoRenderer(localStream);
const videoElement = renderer.getElement();
localVideoContainer.appendChild(videoElement);
}
})
})
const getRemoteVideoStreams = (conference) =>
Array.from(conference.endpoints.value.values()).flatMap((endpoint) => endpoint.getAnyVideoStreams());
const remoteVideoContainer = document.getElementById('remote_video_holder');
const renderConferenceRemoteVideo = (conference) => {
remoteVideoContainer.replaceChildren();
const streams = getRemoteVideoStreams(conference);
streams.forEach((stream) => {
const videoRenderer = streamModule.rendererManager.createVideoRenderer(stream);
const videoElement = videoRenderer.getElement();
remoteVideoContainer.appendChild(videoElement);
});
};
conference.endpoints.watch((endpointsMap) => {
endpointsMap.forEach((endpoint) => {
endpoint.addEventListener(EndpointEvent.RemoteMediaAdded, () => {
renderConferenceRemoteVideo(conference);
});
endpoint.addEventListener(EndpointEvent.RemoteMediaRemoved, () => {
renderConferenceRemoteVideo(conference);
});
});
renderConferenceRemoteVideo(conference);
});
SmartQueue
The Voximplant Web SDK API to manage the agent status in a contact center workspace has moved to a separate SmartQueue module.
Before you start using the SmartQueue module API, you should register the SmartQueue module via the Core.registerModules API:
import { smartQueueToken, SmartQueueLoader } from '@voximplant/websdk/modules/smart-queue';
const core = Core.init({});
core.registerModules([SmartQueueLoader()]);
Notable changes:
The SmartQueue module does not support ACD v1
Smart queue statuses are divided into 3 enums and the SmartQueueStatus union:
- SmartQueueAgentStatus — statuses that can be set by an agent.
- SmartQueueSystemStatus — statuses that are automatically assigned and cannot be manually selected.
- SmartQueueCustomStatus — custom statuses that can be set by an agent.
The API to set an agent status does not return a promise; to handle the result of the set status operation, use events or watchables (see the sketch after the Handle status change examples below)
The API to get an agent status is replaced with the SmartQueue.callStatus and SmartQueue.messagingStatus watchables
If a Voximplant user is authenticated, the SmartQueue module automatically gets the current call and messaging statuses on module initialization and when the connection to the Voximplant Cloud is restored after network issues
Set call and messaging status
From v4:
const sdk = VoxImplant.getInstance();
await sdk.setOperatorACDStatus(newStatus);
await sdk.setOperatorSQMessagingStatus(newStatus);
To v5:
const smartQueue = core.getModule(smartQueueToken);
smartQueue.setCallStatus(newStatus);
smartQueue.setMessagingStatus(newStatus);
Handle status change
From v4:
sdk.on(VoxImplant.Events.ACDStatusUpdated, (event) => {
// handle status change
});
sdk.on(VoxImplant.Events.SQMessagingStatusUpdated, (event) => {
// handle status change
});
To v5:
smartQueue.addEventListener(SmartQueueEvent.CallStatusUpdated, (event) => {
// handle status change
})
smartQueue.addEventListener(SmartQueueEvent.MessagingStatusUpdated, (event) => {
// handle status change
})
// Or with watchable smartqueue properties
smartQueue.callStatus.watch( (newStatus, oldStatus) => {
// handle status change
});
smartQueue.messagingStatus.watch( (newStatus, oldStatus) => {
// handle status change
});
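Because the set status API does not return a promise, a practical pattern is to combine setting the status with watching the corresponding watchable to confirm that the change has been applied. A minimal sketch using only the APIs shown above:
const smartQueue = core.getModule(smartQueueToken);
// Request the status change...
smartQueue.setCallStatus(newStatus);
// ...and react once the SmartQueue module reports the applied status
smartQueue.callStatus.watch((currentStatus) => {
  if (currentStatus === newStatus) {
    // update the agent UI
  }
});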
Capture SDK logs
Voximplant Web SDK v5 provides a more flexible interface to capture SDK logs with extended information such as a log timestamp, time format selection, and a customizable prefix.
Log collection is configured for all modules at once and should be done at SDK initialization.
From v4:
const sdk = VoxImplant.getInstance();
await sdk.init({
showDebugInfo: false,
showWarnings: true,
});
sdk.setLoggerCallback((log) => {
// process a log message
});
To v5:
const core = Core.init({
logger: {
enableConsoleLogger: false,
prefix: 'MY_WEBSDK_APP',
timeFormat: TimeFormat.Timestamp, // change timestamp to unix time (ms)
callbackLogLevel: LogLevel.Warning, // send errors and warnings to callback
onLogCallback: (params) => {
// process a log message
}
},
});