The 100 million calls per month scale claim is ambitious. Most voice AI platforms struggle to maintain consistent quality even at much smaller volumes due to infrastructure bottlenecks and model limitations.

How do you handle the latency issues you mentioned, especially across 100+ languages? Cross-language models typically have higher inference times, and maintaining conversational flow gets harder when you're supporting that many languages simultaneously.

The 1M+ calls across 7 enterprises is solid traction, but that averages to about 143K calls per enterprise.

Definitely going to test the platform to see how it handles more complex conversation flows beyond basic support scenarios.
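For what it's worth, the ~143K figure is just the stated lower bound divided evenly; a quick sanity check, assuming exactly 1M calls split equally across the 7 customers (the "1M+" and any skew between customers would change this):

```python
# Back-of-envelope check on the stated traction numbers.
total_calls = 1_000_000  # lower bound from the "1M+" claim
enterprises = 7

avg_per_enterprise = total_calls / enterprises
print(f"~{avg_per_enterprise:,.0f} calls per enterprise")  # ~142,857
```

In practice usage is rarely uniform, so a couple of large accounts could dominate that total.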