Web Calls
Embed browser-based voice calls in your website so visitors can speak with your AI agent directly from the browser.
Web calls let your users talk to your voice agent directly from a web browser — no phone number needed. Using WebRTC, thinnestAI streams audio between the browser and your agent in real time, creating a seamless voice experience embedded in your website or app.
How Web Calls Work
User clicks the "Call" button on your website
↓
Browser requests a WebRTC session from thinnestAI
↓
thinnestAI voice engine establishes audio stream
↓
User speaks → STT → AI Agent → TTS → User hears response
↓
Call ends when the user hangs up or the agent completes the conversation

No phone network is involved. Audio travels over the internet using WebRTC, which is supported by all modern browsers.
Starting a Web Call via API
Step 1: Create a Session Token
Before connecting from the browser, request a session token from your backend:
curl -X POST "https://api.thinnest.ai/voice/start" \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"agent_id": "agent_xyz",
"context": {
"user_name": "Jane Doe",
"page": "pricing"
}
}'

Response:
{
"session_id": "session_abc123",
"token": "eyJhbGciOiJIUzI1NiIs...",
"websocket_url": "wss://voice.thinnest.ai/ws/session_abc123"
}

Step 2: Connect from the Browser
Use the session token to establish the WebRTC connection:
const response = await fetch('/api/start-call', { method: 'POST' });
const { token, websocket_url } = await response.json();
// Connect to the thinnestAI voice engine
const ws = new WebSocket(websocket_url);
// Set up WebRTC peer connection
const peerConnection = new RTCPeerConnection();
// Get user's microphone
const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
stream.getTracks().forEach(track => {
peerConnection.addTrack(track, stream);
});
// Handle incoming audio from the agent
peerConnection.ontrack = (event) => {
const audio = new Audio();
audio.srcObject = event.streams[0];
audio.play();
};
// Exchange WebRTC signaling via WebSocket
ws.onmessage = async (event) => {
const message = JSON.parse(event.data);
if (message.type === 'offer') {
await peerConnection.setRemoteDescription(message.sdp);
const answer = await peerConnection.createAnswer();
await peerConnection.setLocalDescription(answer);
ws.send(JSON.stringify({ type: 'answer', sdp: answer }));
}
if (message.type === 'ice-candidate') {
await peerConnection.addIceCandidate(message.candidate);
}
};
// Send ICE candidates
peerConnection.onicecandidate = (event) => {
if (event.candidate) {
ws.send(JSON.stringify({
type: 'ice-candidate',
candidate: event.candidate
}));
}
};
// Authenticate
ws.onopen = () => {
ws.send(JSON.stringify({ type: 'auth', token }));
};

Using the Embed Widget
For a faster integration, use the thinnestAI embed widget. It handles all WebRTC complexity for you.
Script Tag Integration
Add this to your website:
<script src="https://cdn.thinnest.ai/widget.js"></script>
<script>
ThinnestAI.init({
agentId: 'agent_xyz',
apiKey: 'YOUR_PUBLIC_KEY',
position: 'bottom-right',
theme: 'light',
greeting: 'Click to talk to our AI assistant'
});
</script>

This renders a floating call button. When clicked, it connects to your voice agent.
React Component
If you are using React:
import { ThinnestVoiceWidget } from '@thinnest/embed-sdk';
function App() {
return (
<ThinnestVoiceWidget
agentId="agent_xyz"
apiKey="YOUR_PUBLIC_KEY"
position="bottom-right"
theme="light"
onCallStart={(sessionId) => console.log('Call started:', sessionId)}
onCallEnd={(sessionId, duration) => console.log('Call ended:', duration)}
/>
);
}

Widget Configuration
| Option | Type | Default | Description |
|---|---|---|---|
| agentId | string | required | Your voice agent ID |
| apiKey | string | required | Your public API key |
| position | string | bottom-right | Widget position: bottom-right or bottom-left |
| theme | string | light | light or dark |
| greeting | string | Talk to us | Text shown on the widget button |
| primaryColor | string | #0066FF | Widget accent color |
| autoOpen | boolean | false | Open the widget automatically |
| context | object | {} | Key-value data passed to the agent |
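Putting several of the options above together, a fully configured widget might look like this (all option values here are illustrative):

```html
<script src="https://cdn.thinnest.ai/widget.js"></script>
<script>
  ThinnestAI.init({
    agentId: 'agent_xyz',
    apiKey: 'YOUR_PUBLIC_KEY',
    position: 'bottom-left',
    theme: 'dark',
    greeting: 'Questions? Talk to us',
    primaryColor: '#7C3AED',
    autoOpen: false,
    context: { plan: 'pro', page: 'pricing' }
  });
</script>
```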
Handling Call Events
WebSocket Events
The WebSocket connection sends events during the call:
ws.onmessage = (event) => {
const message = JSON.parse(event.data);
switch (message.type) {
case 'call_started':
console.log('Agent is ready');
updateUI('connected');
break;
case 'agent_speaking':
// Agent is currently speaking
updateUI('agent-speaking');
break;
case 'agent_listening':
// Agent is listening for user input
updateUI('listening');
break;
case 'transcript':
// Real-time transcript update
console.log(`${message.role}: ${message.text}`);
appendTranscript(message.role, message.text);
break;
case 'call_ended':
console.log('Call ended. Duration:', message.duration);
updateUI('ended');
cleanup();
break;
case 'error':
console.error('Call error:', message.message);
updateUI('error');
break;
}
};

Widget Events
The embed widget emits events you can listen for:
ThinnestAI.on('call:start', (data) => {
console.log('Call started:', data.sessionId);
// Track in analytics
analytics.track('voice_call_started', { page: window.location.pathname });
});
ThinnestAI.on('call:end', (data) => {
console.log('Call ended. Duration:', data.duration);
analytics.track('voice_call_ended', { duration: data.duration });
});
ThinnestAI.on('call:error', (data) => {
console.error('Call failed:', data.error);
});
ThinnestAI.on('transcript', (data) => {
// Real-time transcript for display
console.log(`${data.role}: ${data.text}`);
});

Building a Custom Call UI
If you want full control over the UI, build a custom interface using the thinnestAI JavaScript SDK.
Minimal Example
<!DOCTYPE html>
<html>
<head>
<title>Voice Call</title>
<style>
.call-container {
display: flex;
flex-direction: column;
align-items: center;
padding: 2rem;
font-family: sans-serif;
}
.status {
font-size: 1.2rem;
margin: 1rem 0;
color: #666;
}
.status.connected { color: #22c55e; }
.status.ended { color: #ef4444; }
button {
padding: 1rem 2rem;
font-size: 1rem;
border: none;
border-radius: 9999px;
cursor: pointer;
}
.call-btn { background: #22c55e; color: white; }
.hangup-btn { background: #ef4444; color: white; }
.transcript {
width: 100%;
max-width: 500px;
margin-top: 1rem;
padding: 1rem;
background: #f9f9f9;
border-radius: 8px;
max-height: 300px;
overflow-y: auto;
}
.transcript p { margin: 0.5rem 0; }
.transcript .agent { color: #2563eb; }
.transcript .user { color: #16a34a; }
</style>
</head>
<body>
<div class="call-container">
<h1>Talk to Our Agent</h1>
<p class="status" id="status">Ready to call</p>
<button class="call-btn" id="callBtn" onclick="startCall()">Start Call</button>
<button class="hangup-btn" id="hangupBtn" onclick="endCall()" style="display:none">End Call</button>
<div class="transcript" id="transcript"></div>
</div>
<script src="https://cdn.thinnest.ai/sdk.js"></script>
<script>
let callSession = null;
async function startCall() {
document.getElementById('status').textContent = 'Connecting...';
document.getElementById('callBtn').style.display = 'none';
callSession = await ThinnestAI.createCall({
agentId: 'agent_xyz',
apiKey: 'YOUR_PUBLIC_KEY'
});
callSession.on('connected', () => {
document.getElementById('status').textContent = 'Connected';
document.getElementById('status').className = 'status connected';
document.getElementById('hangupBtn').style.display = 'inline-block';
});
callSession.on('transcript', (data) => {
const p = document.createElement('p');
p.className = data.role;
p.textContent = `${data.role === 'agent' ? 'Agent' : 'You'}: ${data.text}`;
document.getElementById('transcript').appendChild(p);
});
callSession.on('ended', () => {
document.getElementById('status').textContent = 'Call ended';
document.getElementById('status').className = 'status ended';
document.getElementById('hangupBtn').style.display = 'none';
document.getElementById('callBtn').style.display = 'inline-block';
});
await callSession.connect();
}
function endCall() {
if (callSession) {
callSession.disconnect();
}
}
</script>
</body>
</html>

Browser Compatibility
Web calls use WebRTC, which is supported by:
| Browser | Version | Notes |
|---|---|---|
| Chrome | 56+ | Full support |
| Firefox | 44+ | Full support |
| Safari | 11+ | Full support |
| Edge | 79+ | Full support (Chromium-based) |
| Mobile Chrome | 56+ | Full support |
| Mobile Safari | 11+ | Requires user gesture to start audio |
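For browsers outside this table, you can feature-detect WebRTC support before showing the call button. This is a generic browser check, not part of the thinnestAI SDK; the supportsWebCalls helper name is ours. It takes the global object as a parameter so the same logic is easy to test:

```javascript
// Returns true when the environment exposes the APIs web calls need:
// RTCPeerConnection for the audio stream and getUserMedia for the mic.
function supportsWebCalls(env) {
  return Boolean(
    env &&
    typeof env.RTCPeerConnection === 'function' &&
    env.navigator &&
    env.navigator.mediaDevices &&
    typeof env.navigator.mediaDevices.getUserMedia === 'function'
  );
}

// In the browser, pass the global object:
//   if (!supportsWebCalls(window)) showPhoneFallback();
```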
Microphone Permissions
The browser will prompt the user for microphone access. Handle the permission denied case:
try {
const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
// Microphone access granted
} catch (err) {
if (err.name === 'NotAllowedError') {
alert('Please allow microphone access to start the call.');
} else {
alert('Could not access microphone. Please check your device settings.');
}
}

Best Practices
- Request microphone permission early — Do not wait until the user clicks "Call." Request permission on page load or after a preliminary interaction so the call starts instantly.
- Show visual feedback — Indicate when the agent is speaking vs. listening. Users need to know when to talk.
- Display a live transcript — Showing the conversation in text alongside the audio helps users follow along and improves accessibility.
- Handle network issues — WebRTC connections can drop on poor networks. Implement reconnection logic and show clear error messages.
- Test on mobile — Mobile browsers have stricter autoplay policies. Audio playback often requires a user gesture (tap/click) to start.
- Provide a fallback — If WebRTC is not supported or the connection fails, offer a phone number as an alternative.
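The reconnection logic recommended above can be sketched with simple exponential backoff. Both connectWithRetry and backoffDelayMs are illustrative helpers, not SDK functions; the connect callback stands in for whatever call establishes your session:

```javascript
// Delay before retry attempt N: 500ms, 1s, 2s, 4s, ... capped at maxMs.
function backoffDelayMs(attempt, baseMs = 500, maxMs = 10000) {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

// Retry a connect function up to maxAttempts times, waiting an
// exponentially increasing delay between failures.
async function connectWithRetry(connect, maxAttempts = 5) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await connect();
    } catch (err) {
      if (attempt === maxAttempts - 1) throw err; // give up, surface error
      await new Promise(resolve => setTimeout(resolve, backoffDelayMs(attempt)));
    }
  }
}
```

After exhausting retries, show a clear error and offer the phone-number fallback described above.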
Next Steps
- Voice Configuration — Customize the voice your web agent uses
- Inbound Calls — Let customers call you on a phone number too
- Call Recording — Record web calls for quality assurance