Replies: 1 comment
This is available today by using the excellent LiteLLM OSS proxy, which natively integrates with Langfuse.
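For reference, a rough sketch of that route using LiteLLM's Python SDK (the OSS proxy achieves the same thing via a `success_callback: ["langfuse"]` entry in its config file). This is not an official snippet; it assumes `LANGFUSE_PUBLIC_KEY` / `LANGFUSE_SECRET_KEY` (and `LANGFUSE_HOST` for self-hosted instances) are set in the environment and that a `llama3` model is available in the local Ollama instance:

```python
# Rough sketch: route Ollama calls through LiteLLM and let its Langfuse
# callback record every request/response pair.
import litellm

# Every successful completion is exported to Langfuse (keys read from env vars).
litellm.success_callback = ["langfuse"]

response = litellm.completion(
    model="ollama/llama3",              # assumed local model name
    api_base="http://localhost:11434",  # default Ollama endpoint
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```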
Describe the feature or potential improvement
I would like to propose a new feature for Langfuse – the ability to operate in proxy mode for Ollama-based and other LLM applications. This enhancement would allow Langfuse to intercept and log requests and responses without requiring developers to manually instrument their application code.
By acting as a proxy between the application and Ollama (or other LLM services), Langfuse could provide observability, tracing, and cost tracking out of the box by simply routing traffic through the Langfuse instance.
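To make the idea concrete, here is a minimal sketch of such a logging reverse proxy, not Langfuse's actual implementation. It assumes Ollama listens on `localhost:11434` and the proxy on `localhost:8080`; `log_interaction` is a hypothetical stand-in for whatever call would write the trace into Langfuse, and streaming responses are not handled:

```python
# Minimal sketch of a logging reverse proxy in front of Ollama (illustrative only).
import time

import httpx
from fastapi import FastAPI, Request, Response

OLLAMA_URL = "http://localhost:11434"
app = FastAPI()


def log_interaction(path: str, request_body: bytes, response_body: bytes, latency_s: float) -> None:
    # Hypothetical hook: a real integration would create a Langfuse trace/generation
    # here (model, prompt, completion, latency, token usage).
    print(f"{path}: {latency_s:.2f}s, {len(response_body)} bytes")


@app.api_route("/{path:path}", methods=["GET", "POST"])
async def proxy(path: str, request: Request) -> Response:
    body = await request.body()
    start = time.monotonic()
    async with httpx.AsyncClient(timeout=None) as client:
        upstream = await client.request(
            request.method,
            f"{OLLAMA_URL}/{path}",
            content=body,
            headers={"content-type": request.headers.get("content-type", "application/json")},
        )
    log_interaction(path, body, upstream.content, time.monotonic() - start)
    return Response(
        content=upstream.content,
        status_code=upstream.status_code,
        media_type=upstream.headers.get("content-type"),
    )
```

Run with e.g. `uvicorn proxy:app --port 8080` and point the application at port 8080 instead of 11434.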
Additional information
Why This Feature is Valuable:
Currently, integrating Langfuse into LLM applications requires developers to modify their code to wrap API calls or add observability logic. While this approach works well, it introduces friction, particularly where modifying application code is costly or impractical.
A proxy mode would significantly lower the barrier to adoption by enabling observability with zero code changes to the application itself. Developers would simply update their endpoint to point to Langfuse, which would forward the traffic to the actual Ollama instance while logging all interactions.
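As a hedged illustration of that single change (assuming the proxy listens on `localhost:8080` and forwards to Ollama's default `localhost:11434`; model and prompt are placeholders):

```python
import requests

# OLLAMA_BASE = "http://localhost:11434"  # direct to Ollama
OLLAMA_BASE = "http://localhost:8080"     # via the Langfuse proxy: the only change

resp = requests.post(
    f"{OLLAMA_BASE}/api/generate",
    json={"model": "llama3", "prompt": "Why is the sky blue?", "stream": False},
)
print(resp.json()["response"])
```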
Key Benefits for Developers:
- Zero Code Changes for Observability: applications only need to change their endpoint (e.g., from `localhost:11434` to `localhost:8080` for the Langfuse proxy), eliminating the need to modify application logic.
- Unified Observability Without Instrumentation
- Dynamic Request Routing and Load Balancing
- Security and Access Control
- Cost Monitoring and Performance Analytics
- Simplified Experimentation (A/B Testing)
How This Could Work (Conceptual Flow):
Implementation Ideas:
Use Cases:
Conclusion:
A proxy mode for Langfuse would provide significant value by simplifying observability and reducing friction for developers integrating LLMs like Ollama. This feature aligns with Langfuse’s mission to enhance LLM observability, making it easier for teams to monitor, debug, and optimize their AI workflows with minimal overhead.
I believe this addition could greatly expand Langfuse’s adoption and usability across a wide range of LLM-powered applications. Thank you for considering this feature request!