LangSmith - Visibility While Building with Tracing
As the complexity of LLM-powered applications increases, understanding what’s happening under the hood becomes crucial—not just for debugging but for continuous optimization and ensuring system reliability. This is where LangSmith shines, providing developers with powerful tools to trace, visualize, and debug their AI workflows.
In this post, we'll explore how LangSmith enables deep observability in your applications through tracing, allowing for a more efficient and transparent development process.
Tracing with `@traceable`
The cornerstone of LangSmith’s tracing capabilities is the `@traceable` decorator, a simple and effective way to log detailed traces from your Python functions.
How it Works
By applying `@traceable` to a function, LangSmith automatically generates a run tree each time the function is called. This tree links all function calls to the current trace, capturing essential information such as:
- Function inputs
- Function name
- Execution metadata
Furthermore, if the function raises an error or returns a response, LangSmith captures this and adds it to the trace. The result is sent to LangSmith in real-time, allowing you to monitor the health of your application. Importantly, this happens in a background thread, ensuring that your app’s performance remains unaffected.
This method is invaluable when debugging or identifying the root cause of an issue. The detailed trace data allows you to trace errors back to their source and quickly rectify problems in your codebase.
Code Example: Using `@traceable`

    import random

    from langsmith import traceable

    # Apply the @traceable decorator to the function you want to trace
    @traceable
    def process_transaction(transaction_id, amount):
        """Simulates processing a financial transaction."""
        # Simulate processing logic: the outcome is chosen at random
        result = random.choice(["success", "failure"])
        if result == "failure":
            raise ValueError(
                f"Transaction {transaction_id} failed due to insufficient funds."
            )
        return f"Transaction {transaction_id} processed with amount {amount}."

    # Call the function; each call is traced, including any error raised
    try:
        print(process_transaction(101, 1000))
        print(process_transaction(102, 2000))
    except ValueError as e:
        print(e)
Explanation:
- The `@traceable` decorator logs a detailed trace each time the `process_transaction` function is called.
- Inputs such as `transaction_id` and `amount` are automatically captured.
- Execution metadata, such as the function name, is also logged.
- If an error occurs (as in a failed transaction), LangSmith captures the error and associates it with the trace.
Adding Metadata for Richer Traces
LangSmith allows you to send arbitrary metadata along with each trace. This metadata is a set of key-value pairs that can be attached to your function runs, providing additional context. Some examples include:
- Version of the application that generated the run
- Environment in which the run occurred (e.g., development, staging, production)
- Custom data relevant to the trace
Metadata is especially useful when you need to filter or group runs in the LangSmith UI for more granular analysis. For instance, you could group traces by version to monitor how specific changes are impacting your system.
Code Example: Adding Metadata
    from langsmith import traceable

    # Static metadata attached to every run of this function
    @traceable(metadata={"app_version": "1.2.3", "environment": "production"})
    def process_order(order_id, user_id, amount):
        """Processes an order and simulates transaction completion."""
        # Simulate order processing logic
        if amount <= 0:
            raise ValueError("Invalid order amount")
        return f"Order {order_id} processed for user {user_id} with amount {amount}"

    try:
        print(process_order(101, 1001, 150))
        print(process_order(102, 1002, -10))  # This will raise an error
    except ValueError as e:
        print(f"Error: {e}")
Explanation:
- The `metadata` parameter is added to the decorator, including the app version and environment.
- This metadata will be logged with the trace, allowing you to filter and group runs by these values in LangSmith’s UI.
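Metadata doesn't have to be fixed at decoration time. As a hedged sketch: functions wrapped with `@traceable` also accept a reserved `langsmith_extra` keyword argument for supplying run-specific metadata at call time (the function name and payload below are illustrative):

    from langsmith import traceable

    @traceable
    def handle_request(payload):
        # Illustrative stand-in for real request-handling logic
        return f"Handled {payload['kind']} request"

    # langsmith_extra attaches metadata to this particular run without
    # changing the function's signature
    handle_request(
        {"kind": "checkout"},
        langsmith_extra={"metadata": {"environment": "staging", "app_version": "1.2.4"}},
    )

This is useful when a value such as the environment is only known at runtime.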
LLM Runs for Chat Models
LangSmith offers special processing and rendering for LLM traces. To make full use of this feature, you need to log LLM traces in a specific format.
Input Format
For chat-based models, inputs should be logged as a list of messages, formatted in an OpenAI-compatible style. Each message must contain:
- `role`: the role of the message sender (e.g., `user`, `assistant`)
- `content`: the content of the message
Output Format
Outputs from your LLM can be logged in various formats:
- A dictionary containing `choices`, which is a list of dictionaries. Each dictionary must contain a `message` key with the message object (role and content).
- A dictionary containing a `message` key, which maps to the message object.
- A tuple/array with the role as the first element and the content as the second element.
- A dictionary with `role` and `content` directly.
Additionally, LangSmith allows for the inclusion of metadata such as:
- `ls_provider`: the model provider (e.g., "openai", "anthropic")
- `ls_model_name`: the model name (e.g., "gpt-4o-mini", "claude-3-opus")
These fields help LangSmith identify the model and compute associated costs, ensuring that the tracking is precise.
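To make this concrete, here is a minimal sketch of logging a custom chat-model call with `run_type="llm"`. The response is stubbed for illustration; in practice you would call your provider's SDK:

    from langsmith import traceable

    # run_type="llm" enables LangSmith's chat-model rendering; ls_provider
    # and ls_model_name let it identify the model and compute costs
    @traceable(
        run_type="llm",
        metadata={"ls_provider": "openai", "ls_model_name": "gpt-4o-mini"},
    )
    def chat_model(messages):
        # Stubbed response in one of the accepted output formats: a dict
        # with "choices", each holding a "message" object
        return {
            "choices": [
                {"message": {"role": "assistant", "content": "Hello! How can I help?"}}
            ]
        }

    # Inputs are an OpenAI-style list of messages, each with role and content
    chat_model([
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Say hello."},
    ])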
LangChain and LangGraph Integration
LangSmith integrates seamlessly with LangChain and LangGraph, enabling advanced functionality in your AI workflows. LangChain provides powerful tools for managing LLM chains, while LangGraph offers a visual representation of your AI workflow. Together with LangSmith’s tracing tools, you can gain deep insights into how your chains and graphs are performing, making optimization easier.
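As a hedged sketch of what this looks like in practice: once tracing is enabled through environment variables, LangChain components are traced automatically, with no decorator needed (this assumes the `langchain-openai` package and valid API keys):

    # Assumes these environment variables are set before running:
    #   LANGSMITH_TRACING=true
    #   LANGSMITH_API_KEY=<your LangSmith key>
    #   OPENAI_API_KEY=<your OpenAI key>
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_openai import ChatOpenAI

    prompt = ChatPromptTemplate.from_messages([
        ("system", "You are a helpful assistant."),
        ("user", "{question}"),
    ])
    chain = prompt | ChatOpenAI(model="gpt-4o-mini")

    # Each invocation appears in LangSmith as a nested run tree: the prompt
    # step, the LLM call, and the final output
    print(chain.invoke({"question": "What does tracing give me?"}).content)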
Tracing Context Manager
Sometimes, you might want more control over the tracing process. This is where the Tracing Context Manager comes in. The context manager gives you the flexibility to log traces for specific blocks of code, especially when it's not feasible to use a decorator or wrapper.
Using the context manager, you can control the inputs, outputs, and other trace attributes within a specific scope. It integrates smoothly with the `@traceable` decorator and other wrappers, allowing you to mix and match tracing strategies depending on your use case.
Code Example: Using the Tracing Context Manager
    from langsmith import trace

    def complex_function(data):
        # Trace just this block, passing inputs and metadata explicitly
        with trace(
            name="sum_block",
            inputs={"data": data},
            metadata={"data_size": len(data), "processing_method": "sum"},
        ) as run:
            # Simulate processing logic
            result = sum(data)
            # Record the output on the run before the block exits
            run.end(outputs={"result": result})
        return result

    # Call the function
    print(complex_function([1, 2, 3, 4, 5]))
Explanation:
- The `trace` context manager starts a run for a specific block of code (in this case, summing a list of numbers).
- Inputs and metadata are passed when the context is opened, and the output is recorded with `run.end()` before the block exits.
- This method gives you fine-grained control over where and when traces are logged, providing flexibility when you cannot use the `@traceable` decorator.
Conversational Threads
In many LLM applications, especially chatbots, tracking conversations across multiple turns is critical. LangSmith’s Threads feature allows you to group traces into a single conversation, maintaining context as the conversation progresses.
Grouping Traces
To link traces together, you’ll need to pass a special metadata key (`session_id`, `thread_id`, or `conversation_id`) with a unique value (usually a UUID). This key ensures that all traces related to a particular conversation are grouped together, making it easy to track the progression of each interaction.
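Here is a minimal sketch of grouping two turns into one thread, assuming a simple `@traceable`-wrapped chat function (the reply logic is a placeholder for a real LLM call):

    import uuid

    from langsmith import traceable

    # One ID per conversation; every run sharing this metadata value is
    # grouped into the same thread in the LangSmith UI
    thread_id = str(uuid.uuid4())

    @traceable(name="chat_turn")
    def chat_turn(user_message):
        # Placeholder reply logic -- substitute your real LLM call
        return f"Echo: {user_message}"

    # Attach the thread ID to each turn via per-call metadata
    chat_turn("Hello!", langsmith_extra={"metadata": {"session_id": thread_id}})
    chat_turn("What did I just say?", langsmith_extra={"metadata": {"session_id": thread_id}})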
Summary
LangSmith empowers developers with unparalleled visibility into their applications, especially when working with LLMs. By leveraging the `@traceable` decorator, adding rich metadata, and using advanced features like the tracing context manager and conversational threads, you can optimize the performance, reliability, and transparency of your AI applications.
Whether you're building complex chat applications, debugging deep-seated issues, or simply monitoring your system’s health, LangSmith provides the tools necessary to ensure a smooth development process. Happy coding!