Pi Coding Agent LLM analytics installation

Pi is an open-source coding agent that runs in your terminal. The @posthog/pi extension captures LLM generations, tool executions, and conversation traces as $ai_generation, $ai_span, and $ai_trace events and sends them to PostHog.

Prerequisites

You need:

  • pi installed and working in your terminal
  • A PostHog account and a project API key (found in your PostHog project settings)
Install the extension

Install the PostHog extension globally:

Terminal
pi install npm:@posthog/pi

Or for a project-local install:

Terminal
pi install -l npm:@posthog/pi

Configure PostHog

Set environment variables with your PostHog project API key and host. You can find these in your PostHog project settings.

Terminal
export POSTHOG_API_KEY="<ph_project_api_key>"
export POSTHOG_HOST="https://us.i.posthog.com"

Then start pi as normal:

Terminal
pi

The extension initializes automatically and captures events for every LLM call, tool execution, and completed agent run.

Tip: You can add these environment variables to your shell profile (e.g. ~/.zshrc or ~/.bashrc) so they persist across sessions.

Verify traces and generations

After running a few prompts through pi:

  1. Go to the LLM analytics tab in PostHog.
  2. You should see traces and generations appearing within a few minutes.

Configuration options

All configuration is done via environment variables:

| Variable | Default | Description |
| --- | --- | --- |
| `POSTHOG_API_KEY` | (required) | Your PostHog project API key |
| `POSTHOG_HOST` | `https://us.i.posthog.com` | PostHog ingestion host |
| `POSTHOG_PRIVACY_MODE` | `false` | When `true`, LLM input/output content is not sent to PostHog. Token counts, costs, latency, and model metadata are still captured. |
| `POSTHOG_ENABLED` | `true` | Set to `false` to disable the extension |
| `POSTHOG_TRACE_GROUPING` | `message` | `message`: one trace per user prompt. `session`: group all generations in a session into one trace. |
| `POSTHOG_SESSION_WINDOW_MINUTES` | `60` | Minutes of inactivity before starting a new session window |
| `POSTHOG_PROJECT_NAME` | cwd basename | Project name included in all events |
| `POSTHOG_AGENT_NAME` | agent name | Agent name (auto-detects subagent names) |
| `POSTHOG_TAGS` | (none) | Custom tags added to all events (format: `key1:val1,key2:val2`) |
| `POSTHOG_MAX_ATTRIBUTE_LENGTH` | `12000` | Max length for serialized tool input/output attributes |
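The `POSTHOG_TAGS` format can be illustrated with a small parser. This is a hypothetical sketch for clarity, not the extension's actual implementation; `parseTags` is an illustrative name:

```typescript
// Hypothetical sketch of parsing the POSTHOG_TAGS format (key1:val1,key2:val2).
// parseTags is an illustrative name, not part of the @posthog/pi API.
function parseTags(raw: string): Record<string, string> {
  const tags: Record<string, string> = {};
  for (const pair of raw.split(",")) {
    const idx = pair.indexOf(":");
    if (idx > 0) {
      // Everything before the first colon is the key, the rest is the value.
      tags[pair.slice(0, idx).trim()] = pair.slice(idx + 1).trim();
    }
  }
  return tags;
}

// Example: POSTHOG_TAGS="team:platform,env:staging"
console.log(parseTags("team:platform,env:staging")); // logs the parsed tag map
```

So `POSTHOG_TAGS="team:platform,env:staging"` would attach `team` and `env` properties to every captured event.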

Trace grouping modes

  • message (default): Each user prompt creates a new trace. Multiple LLM turns within one prompt (e.g., tool-use loops) are grouped under the same trace. Best for most use cases.
  • session: All generations within a session window are grouped into a single trace. A new trace starts after POSTHOG_SESSION_WINDOW_MINUTES of inactivity.
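The session-window rule above can be sketched as a simple time comparison. This is an illustrative sketch of the behavior described, not the extension's real internals:

```typescript
// Illustrative sketch: in session mode, a new trace starts once the gap since
// the last event exceeds the configured window (POSTHOG_SESSION_WINDOW_MINUTES).
const SESSION_WINDOW_MINUTES = 60; // the documented default

function startsNewTrace(
  lastEventAtMs: number,
  nowMs: number,
  windowMinutes: number = SESSION_WINDOW_MINUTES
): boolean {
  return nowMs - lastEventAtMs > windowMinutes * 60_000;
}

const last = Date.parse("2024-01-01T10:00:00Z");
console.log(startsNewTrace(last, Date.parse("2024-01-01T10:30:00Z"))); // 30 min gap: false
console.log(startsNewTrace(last, Date.parse("2024-01-01T11:30:00Z"))); // 90 min gap: true
```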

Privacy mode

When POSTHOG_PRIVACY_MODE=true, all LLM input/output content, user prompts, tool inputs, and tool outputs are redacted. Token counts, costs, latency, and model metadata are still captured.

Even with privacy mode off, sensitive keys in tool inputs/outputs (e.g. api_key, token, secret, password, authorization) are automatically redacted.
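Key-based redaction of this kind typically works by matching key names recursively. The sketch below is an assumption about how such redaction could be implemented, using the key list named above; the extension's actual logic may differ:

```typescript
// Hedged sketch of key-based redaction. SENSITIVE_KEYS mirrors the keys the
// docs name; the real @posthog/pi key list and traversal may differ.
const SENSITIVE_KEYS = ["api_key", "token", "secret", "password", "authorization"];

function redact(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(redact);
  if (value !== null && typeof value === "object") {
    const out: Record<string, unknown> = {};
    for (const [k, v] of Object.entries(value as Record<string, unknown>)) {
      // Redact any value whose key contains a sensitive substring (case-insensitive).
      out[k] = SENSITIVE_KEYS.some((s) => k.toLowerCase().includes(s))
        ? "[REDACTED]"
        : redact(v);
    }
    return out;
  }
  return value;
}

console.log(redact({ url: "https://api.example.com", headers: { Authorization: "Bearer abc" } }));
```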

What gets captured

The extension captures three types of events:

  • $ai_generation — Every LLM call, including model, provider, token usage, cost, latency, and input/output messages (in OpenAI chat format).
  • $ai_span — Each tool execution (read, write, edit, bash, etc.), including tool name, input parameters, output result, and duration.
  • $ai_trace — Completed agent runs with aggregated token totals and latency.
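To make the event types concrete, here is an illustrative shape for a captured $ai_generation event. Property names follow PostHog's LLM analytics conventions; the values (model name, token counts, latency) are made-up examples, and the real event carries additional fields:

```typescript
// Illustrative $ai_generation payload. All values are example data, not output
// from a real pi run; the actual event includes more properties.
const generationEvent = {
  event: "$ai_generation",
  properties: {
    $ai_trace_id: "trace-123",      // links this generation to its trace
    $ai_model: "example-model-v1",  // hypothetical model name
    $ai_provider: "example-provider",
    $ai_input_tokens: 1200,
    $ai_output_tokens: 350,
    $ai_latency: 2.4,               // seconds
    // Input messages in OpenAI chat format, as described above:
    $ai_input: [{ role: "user", content: "List the files in src/" }],
  },
};

console.log(generationEvent.event);
```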

Next steps

Now that you're capturing AI conversations, continue with the resources below to learn what else LLM analytics enables within the PostHog platform.

| Resource | Description |
| --- | --- |
| Basics | Learn the basics of how LLM calls become events in PostHog. |
| Generations | Read about the $ai_generation event and its properties. |
| Traces | Explore the trace hierarchy and how to use it to debug LLM calls. |
| Spans | Review spans and their role in representing individual operations. |
| Analyze LLM performance | Learn how to create dashboards to analyze LLM performance. |
