+
+
+ Your AI Gateway is now running
+
+ 1. Let's make a test request
+The gateway supports 250+ models across 36 AI providers. Choose your provider and API key below.
+
+
+ 🐍 Python
+ 📦 Node.js
+ 🌀 cURL
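+Whichever tab you pick, the test request is a plain OpenAI-style HTTP call through the gateway. Here is a minimal TypeScript sketch; the local port (8787), the `/v1/chat/completions` route, and the `x-portkey-provider` header name are assumptions based on common gateway defaults, so verify them against your console output:

```typescript
// Build a chat-completion request for a locally running AI gateway.
// NOTE: the URL, port, and header names below are assumptions, not
// confirmed defaults -- check the gateway's own docs/console.
function buildGatewayRequest(provider: string, apiKey: string, prompt: string) {
  return {
    url: "http://localhost:8787/v1/chat/completions",
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "x-portkey-provider": provider, // which upstream provider to route to
        Authorization: `Bearer ${apiKey}`, // that provider's API key
      },
      body: JSON.stringify({
        model: "gpt-4o-mini",
        messages: [{ role: "user", content: prompt }],
      }),
    },
  };
}

// Usage:
//   const req = buildGatewayRequest("openai", process.env.OPENAI_API_KEY ?? "", "Hello!");
//   const res = await fetch(req.url, req.options);
```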
+
+ 2. Create a routing config
+Gateway configs allow you to route requests to different providers and models. You can load balance, set fallbacks, and configure automatic retries & timeouts. Learn more
+
+
+ Simple Config
+ Load Balancing
+ Fallbacks
+ Retries & Timeouts
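+As a sketch of what such a config might look like, here is a hypothetical load-balancing config with retries; the field names (`strategy`, `targets`, `retry`, the `x-portkey-config` header) are my best recollection of the gateway's config schema and should be confirmed against the docs linked above:

```typescript
// Hypothetical routing config: load-balance across two providers, with
// automatic retries. Field names are assumptions -- confirm against the docs.
const routingConfig = {
  strategy: { mode: "loadbalance" }, // or "fallback" to try targets in order
  targets: [
    { provider: "openai", weight: 0.7 }, // ~70% of traffic
    { provider: "anthropic", weight: 0.3 }, // ~30% of traffic
  ],
  retry: { attempts: 3 }, // retry failed requests up to 3 times
};

// A config is typically attached to each request as a JSON string,
// e.g. in an `x-portkey-config` header (assumed header name):
const configHeader = JSON.stringify(routingConfig);
```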
+
+ 3. More Features to Explore
+Discover advanced capabilities of the Portkey AI Gateway.
+
+
+
+
+ Agents
+Seamlessly integrate with popular agent frameworks like Autogen, CrewAI, and LangChain.
+
+ Multi-modal AI
+Handle vision, audio, and image generation requests across multiple providers.
+
+ Guardrails
+Verify LLM inputs and outputs with 20+ pre-built checks or build your own.
+
+ Supported Providers
+Access 250+ models from 35+ providers including OpenAI, Anthropic, and Google.
+
+
+
+
+
+ Real-time Logs
+
+ | Time | Method | Endpoint | Status | Duration | Actions |
+ |------|--------|----------|--------|----------|---------|
+ | Listening for logs... | | | | | |
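+A console page could fill this table by subscribing to a log stream from the gateway. The sketch below is purely illustrative: the `/logs/stream` endpoint and the event payload shape are invented for this example, not confirmed gateway APIs.

```typescript
// Shape of a single request-log event, as assumed for this sketch.
interface LogEvent {
  time: string;
  method: string;
  endpoint: string;
  status: number;
  duration: number; // milliseconds
}

// Render one event as a row matching the table columns above.
function formatLogRow(e: LogEvent): string {
  return [e.time, e.method, e.endpoint, String(e.status), `${e.duration}ms`].join(" | ");
}

// In a browser, server-sent events could feed the table (hypothetical endpoint):
//   new EventSource("/logs/stream").onmessage =
//     (msg) => console.log(formatLogRow(JSON.parse(msg.data)));
```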
+
\ No newline at end of file
diff --git a/src/services/realtimeLlmEventParser.ts b/src/services/realtimeLlmEventParser.ts
new file mode 100644
index 000000000..88415cc87
--- /dev/null
+++ b/src/services/realtimeLlmEventParser.ts
@@ -0,0 +1,160 @@
+import { Context } from 'hono';
+
+export class RealtimeLlmEventParser {
+ private sessionState: any;
+
+ constructor() {
+ this.sessionState = {
+ sessionDetails: null,
+ conversation: {
+ items: new Map