The unified MCP server approach addresses a real infrastructure problem. Having to reconnect integrations across multiple AI apps is tedious, and context window limits do force tough trade-offs between tool availability and performance.

How does the dynamic tool retrieval handle latency? If you're fetching tool schemas on-demand rather than pre-loading them, each retrieval adds a network round trip that could break the conversational flow, especially in complex multi-step operations where the delays compound. I've put a rough sketch of what I mean below.

I'm also curious about the execution model: when SimpliflowAI runs tools on behalf of the LLM, how do you handle authentication and permissions? Different tools might require different security contexts or user credentials (second sketch below).
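To make the latency question concrete, here's a minimal sketch of the pattern I'm imagining. Everything here is hypothetical (the registry URL, `fetchToolSchema`, the cache), since I don't know SimpliflowAI's internals:

```typescript
// Hypothetical shape of on-demand tool retrieval; none of these
// names are SimpliflowAI's actual API.

interface ToolSchema {
  name: string;
  description: string;
  inputSchema: Record<string, unknown>;
}

const schemaCache = new Map<string, ToolSchema>();

async function fetchToolSchema(toolName: string): Promise<ToolSchema> {
  // Stand-in for the unified server's registry lookup. This is the
  // round trip that pre-loaded schemas avoid.
  const res = await fetch(
    `https://registry.example.com/tools/${encodeURIComponent(toolName)}`
  );
  if (!res.ok) throw new Error(`schema fetch failed: ${res.status}`);
  return (await res.json()) as ToolSchema;
}

async function getToolSchema(toolName: string): Promise<ToolSchema> {
  // A cache amortizes repeat lookups within a session, but the first
  // use of each tool in a multi-step chain still pays full latency,
  // and those waits are sequential when each step depends on the last.
  const cached = schemaCache.get(toolName);
  if (cached !== undefined) return cached;
  const schema = await fetchToolSchema(toolName);
  schemaCache.set(toolName, schema);
  return schema;
}
```

Even with the cache, a five-step chain touching five new tools pays five sequential fetches, which is the scenario I'd want to see numbers for.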
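And on the execution side, this is roughly the mapping problem I'm asking about, again with entirely made-up tool names and context shapes:

```typescript
// Hypothetical per-tool security contexts; illustrative only. The point
// is that one execution path has to carry many credential shapes.

type SecurityContext =
  | { kind: "apiKey"; key: string }
  | { kind: "oauth"; accessToken: string; scopes: string[] }
  | { kind: "userDelegated"; userId: string }; // server acts as the end user

// Which credentials each tool needs. In a real system, this mapping
// and the policy for who may use it are the hard part.
const toolContexts = new Map<string, SecurityContext>([
  ["repo.create_issue", { kind: "oauth", accessToken: "example-token", scopes: ["repo"] }],
  ["weather.lookup", { kind: "apiKey", key: "example-key" }],
  ["mail.send", { kind: "userDelegated", userId: "user-123" }],
]);

function resolveContext(toolName: string, callerUserId: string): SecurityContext {
  const ctx = toolContexts.get(toolName);
  if (ctx === undefined) {
    throw new Error(`no security context registered for ${toolName}`);
  }
  // Delegated tools must only run as the user who actually asked;
  // otherwise the unified server becomes a confused deputy.
  if (ctx.kind === "userDelegated" && ctx.userId !== callerUserId) {
    throw new Error(`${toolName} is delegated to a different user`);
  }
  return ctx;
}
```

Even in this toy version, the open questions are where the credentials live, who can register them, and how you guarantee the LLM can never trigger a tool under a context the requesting user shouldn't have.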