
LangWatch

LLMOps Platform | DSPy Visualizer | Monitoring | Evaluations | Analytics

LangWatch provides a suite of tools to track, visualize, and analyze LLM interactions. With a focus on usability, it helps both developers and non-technical team members fine-tune performance and gain insight into user engagement.

https://langwatch.ai


Features

  • ⚡️ Real-time Telemetry: Capture detailed interaction traces with analytics for LLM cost, latency, and more, giving you the data you need to optimize.
  • πŸ› Detailed Debugging: Capture every step in your LLM call chain, with all metadata and history, grouped by thread and user for easy troubleshooting and reproduction.
  • πŸ“ˆ Make LLM Quality Measurable: Stop relying on gut feeling alone. Use LangEvals evaluators to score your LLM pipeline's output quality with numbers, so you can improve pipelines, change prompts, and switch models with confidence.
  • πŸ“Š DSPy Visualizer: Go a step further and find the best prompts and pipelines automatically with DSPy optimizers, then plug into the LangWatch DSPy visualizer to easily inspect and track the progress of your DSPy experiments, keeping the history and comparing runs as you iterate.
  • ✨ Easier ~Vibe Checking~ too: Even though LangWatch grounds quality in numbers and automated experiments, a human look is still as important as ever. A clean, friendly interface with automatic topic clustering lets you deep dive into the messages being generated, really understand how your LLM is behaving, and find insights to iterate on.
  • πŸš€ User Analytics: Metrics on engagement, user interactions, and more insights into user behaviour, so you can improve your product.
  • πŸ›‘οΈ Guardrails: Detect PII leakage with Google DLP, toxic language with Azure Moderation, and more with the many LangWatch Guardrails available to monitor your LLM outputs and trigger alerts. You can also build custom guardrails with semantic matching or by putting another LLM on top to evaluate the response (see the sketch after this list).

Quickstart (OpenAI Python)

Install the LangWatch library:

pip install langwatch

Then add the @langwatch.trace() decorator to the function that triggers your LLM pipeline:

+ import langwatch
  from openai import OpenAI

+ @langwatch.trace()
  def main():
      client = OpenAI()
      ...

Now, enable autotracking of OpenAI calls for this trace with autotrack_openai_calls():

  import langwatch
  from openai import OpenAI

  @langwatch.trace()
  def main():
      client = OpenAI()
+     langwatch.get_current_trace().autotrack_openai_calls(client)

Next, make sure you have LANGWATCH_API_KEY exported:

export LANGWATCH_API_KEY='your_api_key_here'

Set up your project on LangWatch to generate your API key.

That's it! All your LLM calls will now be automatically captured on LangWatch for monitoring, analytics, and evaluations.
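
Putting the pieces together, a complete minimal script looks like this (the model name and prompt are placeholders, swap in your own):

import langwatch
from openai import OpenAI

@langwatch.trace()
def main():
    client = OpenAI()
    # Every OpenAI call made with this client inside the trace is captured
    langwatch.get_current_trace().autotrack_openai_calls(client)
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "user", "content": "Tell me a joke about observability."}],
    )
    print(completion.choices[0].message.content)

if __name__ == "__main__":
    main()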

For more advanced tracking, and for integration details for other languages like TypeScript and frameworks like LangChain, refer to our documentation.

DSPy Visualizer Quickstart

Install the LangWatch library:

pip install langwatch

Import and authenticate with your LangWatch key:

import langwatch

langwatch.login()

Before your DSPy program compilation starts, initialize LangWatch with your experiment name and the optimizer to be tracked:

# Initialize langwatch for this run, to track the optimizer compilation
langwatch.dspy.init(experiment="my-awesome-experiment", optimizer=optimizer)

compiled_rag = optimizer.compile(RAG(), trainset=trainset)
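
For context, here is how those lines fit into a typical DSPy compilation. This is a sketch: RAG, trainset, and the metric below are assumptions standing in for your own DSPy module, training set, and validation metric:

import langwatch
from dspy.teleprompt import BootstrapFewShot

langwatch.login()

def my_validation_metric(example, prediction, trace=None):
    # Hypothetical metric: check that the expected answer appears in the prediction
    return example.answer.lower() in prediction.answer.lower()

optimizer = BootstrapFewShot(metric=my_validation_metric)

# Initialize langwatch for this run, to track the optimizer compilation
langwatch.dspy.init(experiment="my-awesome-experiment", optimizer=optimizer)

compiled_rag = optimizer.compile(RAG(), trainset=trainset)  # RAG and trainset are yours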

That's it! Now open the link provided when the compilation starts, or go to your LangWatch dashboard, to follow the progress of your experiments.


Running Locally

You need Docker and Docker Compose installed on your machine to run LangWatch locally. The Docker Compose stack needs 8-9 GB of RAM, so make sure enough memory is available to Docker if you are using Docker Desktop.

Then, it's two simple steps:

  1. Copy the langwatch/.env.example file to langwatch/.env

  2. Run docker compose up --build and open LangWatch at http://localhost:3000
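
In shell form:

cp langwatch/.env.example langwatch/.env
docker compose up --build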

Documentation

Detailed documentation is available to help you get the most out of LangWatch.

Self-Hosting

For a more complete guide on how to self-host LangWatch, please refer to the Self-Hosting section of the documentation.

Contributing

Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

Please read our Contribution Guidelines for details on our code of conduct, and the process for submitting pull requests.