
Trace with OpenTelemetry

LangSmith supports OpenTelemetry-based tracing, allowing you to send traces from any OpenTelemetry-compatible application. This guide covers both automatic instrumentation for LangChain applications and manual instrumentation for other frameworks.

For self-hosted and EU region deployments

Update the LangSmith URL in the requests below for self-hosted installations or organizations in the EU region. For the EU region, use eu.api.smith.langchain.com.
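For example, an EU-region organization would point both the LangSmith endpoint and the OTLP endpoint used later in this guide at the EU host. This is a minimal sketch, assuming the EU deployment exposes the same /otel path as the US endpoints shown below; self-hosted installs would substitute their own URL:

import os

# Hypothetical values for an EU-region organization.
os.environ["LANGSMITH_ENDPOINT"] = "https://eu.api.smith.langchain.com"
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "https://eu.api.smith.langchain.com/otel"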

Trace a LangChain application

If you're using LangChain or LangGraph, use the built-in integration to trace your application:

  1. Install the LangSmith package with OpenTelemetry support:

    pip install "langsmith[otel]"
    pip install langchain
    info

    Requires Python SDK version langsmith>=0.3.18.

  2. In your LangChain/LangGraph App, enable the OpenTelemetry integration by setting the LANGSMITH_OTEL_ENABLED environment variable:

    LANGSMITH_OTEL_ENABLED=true
    LANGSMITH_TRACING=true
    LANGSMITH_ENDPOINT=https://api.smith.langchain.com
    LANGSMITH_API_KEY=<your_langsmith_api_key>
  3. Create a LangChain application with tracing. For example:

    import os
    from langchain_openai import ChatOpenAI
    from langchain_core.prompts import ChatPromptTemplate

    # Create a chain
    prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")
    model = ChatOpenAI()
    chain = prompt | model

    # Run the chain
    result = chain.invoke({"topic": "programming"})
    print(result.content)
  4. View the traces in your LangSmith dashboard (example) once your application runs.

Trace a non-LangChain application

For non-LangChain applications or custom instrumentation, you can trace your application in LangSmith with a standard OpenTelemetry client:

  1. Install the OpenTelemetry SDK and exporter packages, as well as the OpenAI package:

    pip install openai
    pip install opentelemetry-sdk
    pip install opentelemetry-exporter-otlp
  2. Set up environment variables for the endpoint, substituting your own values:

    OTEL_EXPORTER_OTLP_ENDPOINT=https://api.smith.langchain.com/otel
    OTEL_EXPORTER_OTLP_HEADERS="x-api-key=<your langsmith api key>"

    Optional: Specify a custom project name other than "default":

    OTEL_EXPORTER_OTLP_ENDPOINT=https://api.smith.langchain.com/otel
    OTEL_EXPORTER_OTLP_HEADERS="x-api-key=<your langsmith api key>,Langsmith-Project=<project name>"
  3. Log a trace.

    This code sets up an OTEL tracer and exporter that will send traces to LangSmith. It then calls OpenAI and sends the required OpenTelemetry attributes.

    import os

    from openai import OpenAI
    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import (
        BatchSpanProcessor,
    )
    from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

    client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

    otlp_exporter = OTLPSpanExporter(
        timeout=10,
    )
    trace.set_tracer_provider(TracerProvider())
    trace.get_tracer_provider().add_span_processor(
        BatchSpanProcessor(otlp_exporter)
    )
    tracer = trace.get_tracer(__name__)

    def call_openai():
        model = "gpt-4o-mini"
        with tracer.start_as_current_span("call_open_ai") as span:
            span.set_attribute("langsmith.span.kind", "LLM")
            span.set_attribute("langsmith.metadata.user_id", "user_123")
            span.set_attribute("gen_ai.system", "OpenAI")
            span.set_attribute("gen_ai.request.model", model)
            span.set_attribute("llm.request.type", "chat")
            messages = [
                {"role": "system", "content": "You are a helpful assistant."},
                {
                    "role": "user",
                    "content": "Write a haiku about recursion in programming."
                }
            ]

            for i, message in enumerate(messages):
                span.set_attribute(f"gen_ai.prompt.{i}.content", str(message["content"]))
                span.set_attribute(f"gen_ai.prompt.{i}.role", str(message["role"]))

            completion = client.chat.completions.create(
                model=model,
                messages=messages
            )

            span.set_attribute("gen_ai.response.model", completion.model)
            span.set_attribute("gen_ai.completion.0.content", str(completion.choices[0].message.content))
            span.set_attribute("gen_ai.completion.0.role", "assistant")
            span.set_attribute("gen_ai.usage.prompt_tokens", completion.usage.prompt_tokens)
            span.set_attribute("gen_ai.usage.completion_tokens", completion.usage.completion_tokens)
            span.set_attribute("gen_ai.usage.total_tokens", completion.usage.total_tokens)
            return completion.choices[0].message

    if __name__ == "__main__":
        call_openai()
  4. View the trace in your LangSmith dashboard (example).

Send traces to an alternate provider

While LangSmith is the default destination for OpenTelemetry traces, you can also configure OpenTelemetry to send traces to other observability platforms.

Use environment variables for global configuration

By default, the LangSmith OpenTelemetry exporter sends data to the LangSmith API OTEL endpoint (and to your own OTEL endpoint as well, if you have configured a global TracerProvider). You can customize this by setting standard OTEL environment variables (a short sketch follows the list below):

  • OTEL_EXPORTER_OTLP_ENDPOINT: Override the endpoint URL
  • OTEL_EXPORTER_OTLP_HEADERS: Add custom headers (LangSmith API keys and Project are added automatically)
  • OTEL_SERVICE_NAME: Set a custom service name (defaults to "langsmith")
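For example, a minimal sketch that routes traces to a hypothetical self-managed collector by setting these variables in code before any LangChain or LangSmith imports run (the endpoint, header, and service name below are placeholders):

import os

# Placeholder endpoint and credentials for a self-managed OTLP collector.
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "https://collector.example.com"
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = "authorization=Bearer <token>"
os.environ["OTEL_SERVICE_NAME"] = "my-llm-service"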

LangSmith uses the HTTP trace exporter by default. If you'd like to use your own tracing provider, you can either:

  1. Set the OTEL environment variables as shown above, or
  2. Set a global trace provider before initializing LangChain components, which LangSmith will detect and use instead of creating its own.

Configure alternate OTLP endpoints

To send traces to a different provider, configure the OTLP exporter with your provider's endpoint:

import os
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

# Set environment variables for LangChain
os.environ["LANGSMITH_OTEL_ENABLED"] = "true"
os.environ["LANGSMITH_TRACING"] = "true"

# Configure the OTLP exporter for your custom endpoint
provider = TracerProvider()
otlp_exporter = OTLPSpanExporter(
    # Change to your provider's endpoint
    endpoint="https://otel.your-provider.com/v1/traces",
    # Add any required headers for authentication
    headers={"api-key": "your-api-key"}
)
processor = BatchSpanProcessor(otlp_exporter)
provider.add_span_processor(processor)
trace.set_tracer_provider(provider)

# Create and run a LangChain application
prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")
model = ChatOpenAI()
chain = prompt | model

result = chain.invoke({"topic": "programming"})
print(result.content)
info

To disable this hybrid behavior in langsmith ≥ 0.4.1 and send traces only to your OTEL endpoint (not to LangSmith), set an additional environment variable:

LANGSMITH_OTEL_ONLY=true

Supported OpenTelemetry attribute and event mapping

When sending traces to LangSmith via OpenTelemetry, the following attributes are mapped to LangSmith fields:

Core LangSmith attributes

Run Hierarchy Attributes

The following attributes that determine run hierarchy (langsmith.span.id, langsmith.trace.id, langsmith.span.dotted_order, langsmith.span.parent_id) should generally not be set manually when using OpenTelemetry. These are primarily used internally by the LangSmith SDK when tracing with OpenTelemetry. While setting these attributes can improve performance, it's not recommended for most use cases as they can interfere with proper run tree construction. For more details on how these attributes work, see the Run Data Format documentation.

| OpenTelemetry attribute | LangSmith field | Notes |
| --- | --- | --- |
| langsmith.trace.name | Run name | Overrides the span name for the run |
| langsmith.span.kind | Run type | Values: llm, chain, tool, retriever, embedding, prompt, parser |
| langsmith.span.id | Run ID | Unique identifier for the span |
| langsmith.trace.id | Trace ID | Unique identifier for the trace |
| langsmith.span.dotted_order | Dotted order | Position in the execution tree |
| langsmith.span.parent_id | Parent run ID | ID of the parent span |
| langsmith.trace.session_id | Session ID | Session identifier for related traces |
| langsmith.trace.session_name | Session name | Name of the session |
| langsmith.span.tags | Tags | Custom tags attached to the span (comma-separated) |
| langsmith.metadata.{key} | metadata.{key} | Custom metadata with langsmith prefix |
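As a rough sketch of how these attributes are used, the snippet below sets a few of them on a manually created span. It assumes a tracer provider and exporter have already been configured as in the earlier example; the span name, tags, and metadata values are illustrative:

from opentelemetry import trace

tracer = trace.get_tracer(__name__)

# Any langsmith.metadata.{key} attribute becomes metadata on the resulting run.
with tracer.start_as_current_span("summarize_document") as span:
    span.set_attribute("langsmith.trace.name", "Summarize document")
    span.set_attribute("langsmith.span.kind", "chain")
    span.set_attribute("langsmith.span.tags", "docs,summarization")
    span.set_attribute("langsmith.metadata.user_id", "user_123")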

GenAI standard attributes

| OpenTelemetry attribute | LangSmith field | Notes |
| --- | --- | --- |
| gen_ai.system | metadata.ls_provider | The GenAI system (e.g., "openai", "anthropic") |
| gen_ai.operation.name | Run type | Maps "chat"/"completion" to "llm", "embedding" to "embedding" |
| gen_ai.prompt | inputs | The input prompt sent to the model |
| gen_ai.completion | outputs | The output generated by the model |
| gen_ai.prompt.{n}.role | inputs.messages[n].role | Role for the nth input message |
| gen_ai.prompt.{n}.content | inputs.messages[n].content | Content for the nth input message |
| gen_ai.prompt.{n}.message.role | inputs.messages[n].role | Alternative format for role |
| gen_ai.prompt.{n}.message.content | inputs.messages[n].content | Alternative format for content |
| gen_ai.completion.{n}.role | outputs.messages[n].role | Role for the nth output message |
| gen_ai.completion.{n}.content | outputs.messages[n].content | Content for the nth output message |
| gen_ai.completion.{n}.message.role | outputs.messages[n].role | Alternative format for role |
| gen_ai.completion.{n}.message.content | outputs.messages[n].content | Alternative format for content |
| gen_ai.tool.name | invocation_params.tool_name | Tool name, also sets run type to "tool" |
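For instance, a tool invocation might be recorded as in the hedged sketch below. The span and tool names are made up; per the table above, setting gen_ai.tool.name is what marks the run as a tool run:

from opentelemetry import trace

tracer = trace.get_tracer(__name__)

# Hypothetical tool call; gen_ai.system records the provider as metadata.
with tracer.start_as_current_span("web_search") as span:
    span.set_attribute("gen_ai.tool.name", "web_search")
    span.set_attribute("gen_ai.system", "openai")
    span.set_attribute("gen_ai.prompt.0.role", "user")
    span.set_attribute("gen_ai.prompt.0.content", "latest OpenTelemetry release notes")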

GenAI request parameters

| OpenTelemetry attribute | LangSmith field | Notes |
| --- | --- | --- |
| gen_ai.request.model | invocation_params.model | The model name used for the request |
| gen_ai.response.model | invocation_params.model | The model name returned in the response |
| gen_ai.request.temperature | invocation_params.temperature | Temperature setting |
| gen_ai.request.top_p | invocation_params.top_p | Top-p sampling setting |
| gen_ai.request.max_tokens | invocation_params.max_tokens | Maximum tokens setting |
| gen_ai.request.frequency_penalty | invocation_params.frequency_penalty | Frequency penalty setting |
| gen_ai.request.presence_penalty | invocation_params.presence_penalty | Presence penalty setting |
| gen_ai.request.seed | invocation_params.seed | Random seed used for generation |
| gen_ai.request.stop_sequences | invocation_params.stop | Sequences that stop generation |
| gen_ai.request.top_k | invocation_params.top_k | Top-k sampling parameter |
| gen_ai.request.encoding_formats | invocation_params.encoding_formats | Output encoding formats |

GenAI usage metrics

| OpenTelemetry attribute | LangSmith field | Notes |
| --- | --- | --- |
| gen_ai.usage.input_tokens | usage_metadata.input_tokens | Number of input tokens used |
| gen_ai.usage.output_tokens | usage_metadata.output_tokens | Number of output tokens used |
| gen_ai.usage.total_tokens | usage_metadata.total_tokens | Total number of tokens used |
| gen_ai.usage.prompt_tokens | usage_metadata.input_tokens | Number of input tokens used (deprecated) |
| gen_ai.usage.completion_tokens | usage_metadata.output_tokens | Number of output tokens used (deprecated) |
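A small sketch of recording usage with the current (non-deprecated) attribute names; the token counts here are placeholders standing in for values returned by your model client, and a tracer provider is assumed to be configured:

from opentelemetry import trace

tracer = trace.get_tracer(__name__)

# Placeholder usage numbers; prefer input_tokens/output_tokens over the
# deprecated prompt_tokens/completion_tokens names.
input_tokens, output_tokens = 12, 48
with tracer.start_as_current_span("llm_call") as span:
    span.set_attribute("gen_ai.usage.input_tokens", input_tokens)
    span.set_attribute("gen_ai.usage.output_tokens", output_tokens)
    span.set_attribute("gen_ai.usage.total_tokens", input_tokens + output_tokens)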

TraceLoop attributes

| OpenTelemetry attribute | LangSmith field | Notes |
| --- | --- | --- |
| traceloop.entity.input | inputs | Full input value from TraceLoop |
| traceloop.entity.output | outputs | Full output value from TraceLoop |
| traceloop.entity.name | Run name | Entity name from TraceLoop |
| traceloop.span.kind | Run type | Maps to LangSmith run types |
| traceloop.llm.request.type | Run type | "embedding" maps to "embedding", others to "llm" |
| traceloop.association.properties.{key} | metadata.{key} | Custom metadata with traceloop prefix |

OpenInference attributes

| OpenTelemetry attribute | LangSmith field | Notes |
| --- | --- | --- |
| input.value | inputs | Full input value, can be string or JSON |
| output.value | outputs | Full output value, can be string or JSON |
| openinference.span.kind | Run type | Maps various kinds to LangSmith run types |
| llm.system | metadata.ls_provider | LLM system provider |
| llm.model_name | metadata.ls_model_name | Model name from OpenInference |
| tool.name | Run name | Tool name when span kind is "TOOL" |
| metadata | metadata.* | JSON string of metadata to be merged |
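For spans emitting OpenInference-style attributes, the mapping works out roughly as in this sketch; the span name and values are illustrative, and a configured tracer provider is assumed:

import json

from opentelemetry import trace

tracer = trace.get_tracer(__name__)

# input.value/output.value may be plain strings or JSON strings.
with tracer.start_as_current_span("answer_question") as span:
    span.set_attribute("openinference.span.kind", "LLM")
    span.set_attribute("llm.system", "openai")
    span.set_attribute("llm.model_name", "gpt-4o-mini")
    span.set_attribute("input.value", json.dumps({"question": "What is OpenTelemetry?"}))
    span.set_attribute("output.value", "An observability framework for traces, metrics, and logs.")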

LLM attributes

| OpenTelemetry attribute | LangSmith field | Notes |
| --- | --- | --- |
| llm.input_messages | inputs.messages | Input messages |
| llm.output_messages | outputs.messages | Output messages |
| llm.token_count.prompt | usage_metadata.input_tokens | Prompt token count |
| llm.token_count.completion | usage_metadata.output_tokens | Completion token count |
| llm.token_count.total | usage_metadata.total_tokens | Total token count |
| llm.usage.total_tokens | usage_metadata.total_tokens | Alternative total token count |
| llm.invocation_parameters | invocation_params.* | JSON string of invocation parameters |
| llm.presence_penalty | invocation_params.presence_penalty | Presence penalty |
| llm.frequency_penalty | invocation_params.frequency_penalty | Frequency penalty |
| llm.request.functions | invocation_params.functions | Function definitions |

Prompt template attributes

| OpenTelemetry attribute | LangSmith field | Notes |
| --- | --- | --- |
| llm.prompt_template.variables | Run type | Sets run type to "prompt", used with input.value |

Retriever attributes

| OpenTelemetry attribute | LangSmith field | Notes |
| --- | --- | --- |
| retrieval.documents.{n}.document.content | outputs.documents[n].page_content | Content of the nth retrieved document |
| retrieval.documents.{n}.document.metadata | outputs.documents[n].metadata | Metadata of the nth retrieved document (JSON) |
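A hedged sketch of what a retriever span might look like using the indexed document attributes above; the documents are made up, and a configured tracer provider is assumed:

import json

from opentelemetry import trace

tracer = trace.get_tracer(__name__)

# Illustrative retrieved documents.
docs = [
    {"page_content": "OpenTelemetry is an observability framework.", "metadata": {"source": "docs/otel.md"}},
    {"page_content": "LangSmith accepts OTLP traces.", "metadata": {"source": "docs/langsmith.md"}},
]

with tracer.start_as_current_span("vector_store_retrieve") as span:
    span.set_attribute("langsmith.span.kind", "retriever")
    for i, doc in enumerate(docs):
        span.set_attribute(f"retrieval.documents.{i}.document.content", doc["page_content"])
        span.set_attribute(f"retrieval.documents.{i}.document.metadata", json.dumps(doc["metadata"]))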

Tool attributes

| OpenTelemetry attribute | LangSmith field | Notes |
| --- | --- | --- |
| tools | invocation_params.tools | Array of tool definitions |
| tool_arguments | invocation_params.tool_arguments | Tool arguments as JSON or key-value pairs |

Logfire attributes

| OpenTelemetry attribute | LangSmith field | Notes |
| --- | --- | --- |
| prompt | inputs | Logfire prompt input |
| all_messages_events | outputs | Logfire message events output |
| events | inputs/outputs | Logfire events array, splits input/choice events |

OpenTelemetry event mapping

| Event name | LangSmith field | Notes |
| --- | --- | --- |
| gen_ai.content.prompt | inputs | Extracts prompt content from event attributes |
| gen_ai.content.completion | outputs | Extracts completion content from event attributes |
| gen_ai.system.message | inputs.messages[] | System message in conversation |
| gen_ai.user.message | inputs.messages[] | User message in conversation |
| gen_ai.assistant.message | outputs.messages[] | Assistant message in conversation |
| gen_ai.tool.message | outputs.messages[] | Tool response message |
| gen_ai.choice | outputs | Model choice/response with finish reason |
| exception | status, error | Sets status to "error" and extracts exception message/stacktrace |

Event attribute extraction

For message events, the following attributes are extracted:

  • content → message content
  • role → message role
  • id → tool_call_id (for tool messages)
  • gen_ai.event.content → full message JSON

For choice events:

  • finish_reason → choice finish reason
  • message.content → choice message content
  • message.role → choice message role
  • tool_calls.{n}.id → tool call ID
  • tool_calls.{n}.function.name → tool function name
  • tool_calls.{n}.function.arguments → tool function arguments
  • tool_calls.{n}.type → tool call type

For exception events:

  • exception.message → error message
  • exception.stacktrace → error stacktrace (appended to message)
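Putting the event mapping together, the sketch below records a user message and a model choice as span events with the standard OpenTelemetry add_event API; the content is illustrative and a configured tracer provider is assumed:

from opentelemetry import trace

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("chat_completion") as span:
    span.set_attribute("langsmith.span.kind", "LLM")
    # Message events become inputs.messages[] / outputs.messages[] per the tables above.
    span.add_event("gen_ai.user.message", {"role": "user", "content": "Write a haiku about recursion."})
    span.add_event(
        "gen_ai.choice",
        {
            "finish_reason": "stop",
            "message.role": "assistant",
            "message.content": "Functions call themselves / each return peels back a layer / base case ends the dream",
        },
    )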

Implementation examples

Trace using the Traceloop SDK

The Traceloop SDK is an OpenTelemetry-compatible SDK that covers a range of models, vector databases, and frameworks. If it covers an integration you want to instrument, you can use it with OpenTelemetry to log traces to LangSmith.

To see what integrations are supported by the Traceloop SDK, see the Traceloop SDK documentation.

To get started, follow these steps:

  1. Install the Traceloop SDK and OpenAI:

    pip install traceloop-sdk
    pip install openai
  2. Configure your environment:

    TRACELOOP_BASE_URL=https://api.smith.langchain.com/otel
    TRACELOOP_HEADERS=x-api-key=<your_langsmith_api_key>

    Optional: Specify a custom project name other than "default":

    TRACELOOP_HEADERS=x-api-key=<your_langsmith_api_key>,Langsmith-Project=<langsmith_project_name>
  3. To use the SDK, you need to initialize it before logging traces:

    from traceloop.sdk import Traceloop
    Traceloop.init()
  4. Log a trace:

    import os
    from openai import OpenAI
    from traceloop.sdk import Traceloop

    client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
    Traceloop.init()

    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {
                "role": "user",
                "content": "Write a haiku about recursion in programming."
            }
        ]
    )

    print(completion.choices[0].message)
  5. View the trace in your LangSmith dashboard (example).

Trace using the Arize SDK

With the Arize SDK and OpenTelemetry, you can log traces from many other frameworks to LangSmith. Below is an example of tracing CrewAI to LangSmith; you can find a full list of supported frameworks here. To make this example work with other frameworks, simply change the instrumentor to match the framework.

  1. Install the required packages:

    pip install -qU arize-phoenix-otel openinference-instrumentation-crewai crewai crewai-tools
  2. Set the following environment variables:

    OPENAI_API_KEY=<your_openai_api_key>
    SERPER_API_KEY=<your_serper_api_key>
  3. Before running any application code, set up the instrumentor (you can replace this with any of the frameworks supported here):

    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor
    from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

    # Add LangSmith API Key for tracing
    LANGSMITH_API_KEY = "YOUR_API_KEY"
    # Set the endpoint for OTEL collection
    ENDPOINT = "https://api.smith.langchain.com/otel/v1/traces"
    # Select the project to trace to
    LANGSMITH_PROJECT = "YOUR_PROJECT_NAME"

    # Create the OTLP exporter
    otlp_exporter = OTLPSpanExporter(
        endpoint=ENDPOINT,
        headers={"x-api-key": LANGSMITH_API_KEY, "Langsmith-Project": LANGSMITH_PROJECT}
    )

    # Set up the trace provider
    provider = TracerProvider()
    processor = BatchSpanProcessor(otlp_exporter)
    provider.add_span_processor(processor)

    # Now instrument CrewAI
    from openinference.instrumentation.crewai import CrewAIInstrumentor
    CrewAIInstrumentor().instrument(tracer_provider=provider)
  4. Run a CrewAI workflow and the trace will automatically be logged to LangSmith:

    from crewai import Agent, Task, Crew, Process
    from crewai_tools import SerperDevTool

    search_tool = SerperDevTool()

    # Define your agents with roles and goals
    researcher = Agent(
        role='Senior Research Analyst',
        goal='Uncover cutting-edge developments in AI and data science',
        backstory="""You work at a leading tech think tank.
        Your expertise lies in identifying emerging trends.
        You have a knack for dissecting complex data and presenting actionable insights.""",
        verbose=True,
        allow_delegation=False,
        # You can pass an optional llm attribute specifying what model you want to use.
        # llm=ChatOpenAI(model_name="gpt-3.5", temperature=0.7),
        tools=[search_tool]
    )
    writer = Agent(
        role='Tech Content Strategist',
        goal='Craft compelling content on tech advancements',
        backstory="""You are a renowned Content Strategist, known for your insightful and engaging articles.
        You transform complex concepts into compelling narratives.""",
        verbose=True,
        allow_delegation=True
    )

    # Create tasks for your agents
    task1 = Task(
        description="""Conduct a comprehensive analysis of the latest advancements in AI in 2024.
        Identify key trends, breakthrough technologies, and potential industry impacts.""",
        expected_output="Full analysis report in bullet points",
        agent=researcher
    )

    task2 = Task(
        description="""Using the insights provided, develop an engaging blog
        post that highlights the most significant AI advancements.
        Your post should be informative yet accessible, catering to a tech-savvy audience.
        Make it sound cool, avoid complex words so it doesn't sound like AI.""",
        expected_output="Full blog post of at least 4 paragraphs",
        agent=writer
    )

    # Instantiate your crew with a sequential process
    crew = Crew(
        agents=[researcher, writer],
        tasks=[task1, task2],
        verbose=False,
        process=Process.sequential
    )

    # Get your crew to work!
    result = crew.kickoff()

    print("######################")
    print(result)
  5. View the trace in your LangSmith dashboard (example).

Advanced configuration

Use OpenTelemetry Collector for fan-out

info

Since langsmith ≥ 0.4.1, setting LANGSMITH_OTEL_ENABLED=true sends traces to both LangSmith and your OTEL endpoint by default (if you have a global trace provider initialized). No extra code is needed for fan-out.

For more advanced scenarios, you can use the OpenTelemetry Collector to fan out your telemetry data to multiple destinations. This is a more scalable approach than configuring multiple exporters in your application code.

  1. Install the OpenTelemetry Collector for your environment.

  2. Create a configuration file (e.g., otel-collector-config.yaml) that exports to multiple destinations:

    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
          http:
            endpoint: 0.0.0.0:4318

    processors:
      batch:

    exporters:
      otlphttp/langsmith:
        endpoint: https://api.smith.langchain.com/otel/v1/traces
        headers:
          x-api-key: ${env:LANGSMITH_API_KEY}
          Langsmith-Project: my_project

      otlphttp/other_provider:
        endpoint: https://otel.your-provider.com/v1/traces
        headers:
          api-key: ${env:OTHER_PROVIDER_API_KEY}

    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [otlphttp/langsmith, otlphttp/other_provider]
  3. Configure your application to send to the collector:

    import os
    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor
    from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
    from langchain_openai import ChatOpenAI
    from langchain_core.prompts import ChatPromptTemplate

    # Point to your local OpenTelemetry Collector
    otlp_exporter = OTLPSpanExporter(
        endpoint="http://localhost:4318/v1/traces"
    )

    provider = TracerProvider()
    processor = BatchSpanProcessor(otlp_exporter)
    provider.add_span_processor(processor)
    trace.set_tracer_provider(provider)

    # Set environment variables for LangChain
    os.environ["LANGSMITH_OTEL_ENABLED"] = "true"
    os.environ["LANGSMITH_TRACING"] = "true"

    # Create and run a LangChain application
    prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")
    model = ChatOpenAI()
    chain = prompt | model

    result = chain.invoke({"topic": "programming"})
    print(result.content)

This approach offers several advantages:

  • Centralized configuration for all your telemetry destinations
  • Reduced overhead in your application code
  • Better scalability and resilience
  • Ability to add or remove destinations without changing application code

Distributed tracing with LangChain and OpenTelemetry

Distributed tracing is essential when your LLM application spans multiple services or processes. OpenTelemetry's context propagation capabilities ensure that traces remain connected across service boundaries.

Context propagation in distributed tracing

In distributed systems, context propagation passes trace metadata between services so that related spans are linked to the same trace:

  • Trace ID: A unique identifier for the entire trace
  • Span ID: A unique identifier for the current span
  • Sampling Decision: Indicates whether this trace should be sampled
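As a quick illustration of what gets propagated, the sketch below uses OpenTelemetry's default W3C propagator to inject the current span's context into a header dict; it assumes a tracer provider has been configured as in the full example that follows:

from opentelemetry import trace
from opentelemetry.propagate import inject

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("upstream_call"):
    carrier = {}
    inject(carrier)  # Writes W3C trace-context headers into the dict
    print(carrier)   # e.g. {'traceparent': '00-<trace_id>-<span_id>-01'}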

Set up distributed tracing with LangChain

To enable distributed tracing across multiple services:

import os
from opentelemetry import trace
from opentelemetry.propagate import inject, extract
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
import requests
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

# Set up OpenTelemetry trace provider
provider = TracerProvider()
otlp_exporter = OTLPSpanExporter(
    endpoint="https://api.smith.langchain.com/otel/v1/traces",
    headers={"x-api-key": os.getenv("LANGSMITH_API_KEY"), "Langsmith-Project": "my_project"}
)
processor = BatchSpanProcessor(otlp_exporter)
provider.add_span_processor(processor)
trace.set_tracer_provider(provider)
tracer = trace.get_tracer(__name__)

# Service A: Create a span and propagate context to Service B
def service_a():
    with tracer.start_as_current_span("service_a_operation") as span:
        # Create a chain
        prompt = ChatPromptTemplate.from_template("Summarize: {text}")
        model = ChatOpenAI()
        chain = prompt | model

        # Run the chain
        result = chain.invoke({"text": "OpenTelemetry is an observability framework"})

        # Propagate context to Service B
        headers = {}
        inject(headers)  # Inject trace context into headers

        # Call Service B with the trace context
        response = requests.post(
            "http://service-b.example.com/process",
            headers=headers,
            json={"summary": result.content}
        )
        return response.json()

# Service B: Extract the context and continue the trace
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/process", methods=["POST"])
def service_b_endpoint():
    # Extract the trace context from the request headers
    context = extract(request.headers)

    with tracer.start_as_current_span("service_b_operation", context=context) as span:
        data = request.json
        summary = data.get("summary", "")

        # Process the summary with another LLM chain
        prompt = ChatPromptTemplate.from_template("Analyze the sentiment of: {text}")
        model = ChatOpenAI()
        chain = prompt | model

        result = chain.invoke({"text": summary})

        return jsonify({"analysis": result.content})

if __name__ == "__main__":
    app.run(port=5000)
