We've been using Helicone for the past few months. For us the benefits are:

- not having to maintain our own proxy translation layer between models
- latency, cost, and usage metrics are really helpful
- easy debugging of when there is an AI failure and why
- supports complex API uses like streaming, rich media, etc.
- minimal latency impact
- friendly pricing (unlike competitors who sometimes take a cut of the model inference itself, which is bonkers)

What it lacks (unless this has changed):

- an authentication layer. We still have to proxy every request to handle authentication, which incurs additional infra and compute cost and adds another failure point.
- model support rollout lags badly; for example, GPT-5 took 2-3 months to become available on Helicone. I understand this was a major API change on OpenAI's part (shame on them), but that pace will be unacceptably slow for many companies, given that OpenAI is a non-negotiable provider to support.

Overall, Helicone is an excellent product and I'm excited for what the future brings.
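To make the authentication gap concrete, here's roughly the kind of shim we have to run in front of the proxy. This is a minimal sketch under stated assumptions: the gateway URL, header names, and key store are all hypothetical placeholders, not Helicone's actual API.

```python
# Sketch of the auth layer we maintain ourselves because the proxy
# has none. All names, URLs, and keys below are illustrative only.
from dataclasses import dataclass

GATEWAY_URL = "https://gateway.helicone.example"  # hypothetical URL

# Our own customer keys -> customer id; we hold one upstream provider key.
CUSTOMER_KEYS = {"cust-key-123": "acme-corp"}
PROVIDER_KEY = "sk-provider-secret"  # placeholder, never a real key

@dataclass
class ForwardDecision:
    allowed: bool
    headers: dict

def authorize_and_rewrite(incoming_headers: dict) -> ForwardDecision:
    """Validate the caller's key, then swap in our provider credentials.

    This is the extra hop (and extra failure point) we run in front of
    every request before it reaches the proxy.
    """
    key = incoming_headers.get("Authorization", "").removeprefix("Bearer ")
    if key not in CUSTOMER_KEYS:
        return ForwardDecision(allowed=False, headers={})
    outgoing = dict(incoming_headers)
    outgoing["Authorization"] = f"Bearer {PROVIDER_KEY}"
    # Tag the request so per-customer usage shows up in the metrics.
    outgoing["X-Customer-Id"] = CUSTOMER_KEYS[key]
    return ForwardDecision(allowed=True, headers=outgoing)
```

Nothing complicated, but it's a whole extra service to deploy, monitor, and keep up when the rest of the stack is down.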