Ops

An Op is a versioned, tracked function. When you decorate a function with @weave.op() (Python) or wrap it with weave.op() (TypeScript), Weave automatically captures its code, inputs, outputs, and execution metadata. Ops are the building blocks of tracing, evaluation scorers, and any tracked computation.
    @weave.op
    async def my_function():
        ...

Calls

A Call is a logged execution of an Op. Every time an Op runs, Weave creates a Call that captures:
  • Input arguments
  • Output value
  • Timing and latency
  • Parent-child relationships (for nested calls)
  • Any errors that occurred
Calls show up as Traces in the Weave UI and provide the data for debugging, analysis, and evaluation. For the full Call object structure and properties, see the Call schema reference. Calls are similar to spans in the OpenTelemetry data model. A Call can:
  • Belong to a Trace (a collection of calls in the same execution context)
  • Have parent and child Calls, forming a tree structure
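The fields above can be sketched as a plain data structure. This is an illustrative model only, not Weave's actual Call class; see the Call schema reference for the real fields:

```python
from dataclasses import dataclass, field
from typing import Any, Optional

# Illustrative sketch of what a logged Call captures --
# NOT Weave's actual Call class (see the Call schema reference).
@dataclass
class Call:
    id: str
    op_name: str
    inputs: dict[str, Any]
    output: Any = None
    exception: Optional[str] = None    # any error that occurred
    started_at: float = 0.0
    ended_at: float = 0.0
    trace_id: str = ""                 # the trace this Call belongs to
    parent_id: Optional[str] = None    # None for a trace's root Call
    children: list["Call"] = field(default_factory=list)

    @property
    def latency(self) -> float:
        return self.ended_at - self.started_at

# A root Call with one nested child, forming a tree.
root = Call(id="c1", op_name="my_function", inputs={"x": 1},
            output=2, started_at=0.0, ended_at=0.5, trace_id="aaa")
child = Call(id="c2", op_name="llm_call", inputs={"prompt": "hi"},
             trace_id="aaa", parent_id=root.id)
root.children.append(child)
```

Note how the child shares the parent's `trace_id` but records its own `parent_id`, which is how nested calls form a tree within one trace.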

Traces

Traces are full trees of Calls that share the same execution context. Each Trace contains an ID (trace_id) you can use to retrieve the entire tree of Calls. Retrieving Call information using the Call’s id only returns data about the specified Call and none of its child Calls.
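To make the distinction concrete, here is a minimal sketch, in plain Python rather than the Weave client API, of looking up a single Call by id versus recovering a whole tree by trace_id (the record shape here is hypothetical):

```python
# Hypothetical flat call records -- the field names are illustrative,
# not Weave's actual API.
calls = [
    {"id": "c1", "trace_id": "aaa", "parent_id": None, "op": "handle_turn"},
    {"id": "c2", "trace_id": "aaa", "parent_id": "c1", "op": "llm_call"},
    {"id": "c3", "trace_id": "aaa", "parent_id": "c1", "op": "format_response"},
    {"id": "c4", "trace_id": "bbb", "parent_id": None, "op": "handle_turn"},
]

def get_call(call_id):
    # Fetching by id returns only that Call -- no children.
    return next(c for c in calls if c["id"] == call_id)

def get_trace(trace_id):
    # Fetching by trace_id recovers the entire tree:
    # a mapping from each parent id to its child ids.
    tree = {}
    for c in calls:
        if c["trace_id"] == trace_id:
            tree.setdefault(c["parent_id"], []).append(c["id"])
    return tree
```

`get_call("c1")` knows nothing about `c2` or `c3`, while `get_trace("aaa")` reconstructs the full parent-child structure and excludes calls from other traces.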

Threads

Threads are collections of traces related to a single session or conversation. Threads can be used to run analysis or scoring on the entire conversation as a whole instead of on individual Calls. The following diagram illustrates the relationships between threads, traces, and calls:
Thread: "session-abc"
  ├── Turn 1 (trace_id: aaa) → user says "Hi"
  │     ├── LLM call
  │     └── format response
  ├── Turn 2 (trace_id: bbb) → user says "What is the capital of France?"
  │     ├── RAG retrieval
  │     ├── LLM call
  │     └── format response
  └── Turn 3 (trace_id: ccc) → user says "Thanks"
        └── LLM call
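A thread-level view like the one above can be sketched by grouping per-turn traces under a shared thread id, so a scorer sees the whole conversation rather than one Call at a time (the data and field names here are hypothetical, not the Weave API):

```python
# Hypothetical per-turn records: each turn is one trace, and all
# turns in a session share a thread id. Illustrative only.
turns = [
    {"thread_id": "session-abc", "trace_id": "aaa", "user": "Hi"},
    {"thread_id": "session-abc", "trace_id": "bbb",
     "user": "What is the capital of France?"},
    {"thread_id": "session-abc", "trace_id": "ccc", "user": "Thanks"},
]

def thread_transcript(thread_id, turns):
    # Collect every turn in the thread, in order, so analysis or
    # scoring can run over the conversation as a whole.
    return [t["user"] for t in turns if t["thread_id"] == thread_id]

transcript = thread_transcript("session-abc", turns)
```

Once the turns are collected, a conversation-level scorer can operate on `transcript` instead of on each trace independently.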