Prompt Logs

Every time a prompt runs, Aisle records the execution. You can see exactly what was passed in, what came back, which model was used, how many tokens it consumed, and where the run originated.

Logs are available to anyone with edit access to the prompt.

Accessing Logs

Open a prompt and click the Logs tab. You'll see a paginated list of all executions, most recent first.

What the Log Shows

Each row in the list shows:

  • Timestamp - when the run happened
  • Status - success, error, or cancelled
  • Model - the model version used
  • Source - what triggered the run
  • Source link - link to the originating chat, workflow, or playground (where available)

Click any row to open the full execution detail.

Execution Detail

The detail view shows:

Inputs - every variable passed to the prompt, including any file uploads. Long inputs are truncated in the view, but the full data is preserved.

Result - the full output of the run. For errors, the error message is shown instead.

Metadata:

  • Model and provider
  • Input tokens, output tokens, cache metrics
  • Source type and source ID
  • Access key used (for API runs)
  • User email (where the run is attributed to a user)

Source Types

The Source column tells you where the run originated:

  • chat - run from a chat conversation
  • workflow - triggered by a workflow node
  • api - called via an API entry point
  • playground - run from a Playground
  • project_tool - invoked as a tool inside a Project chat
  • mcp - called via the MCP execution server

Filtering

Use the filters above the list to narrow down the results:

  • Source - show only runs from a specific trigger type
  • Status - show only successes or errors
  • Version - show runs from a specific prompt version

This is useful for debugging: if a workflow started returning errors after a prompt was updated, filter by workflow source and the affected version to see exactly what changed.
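The same narrowing can be applied programmatically if you export log records. A minimal sketch, assuming each record is a dict with hypothetical source, status, and version keys:

```python
# Minimal sketch: filtering exported log records by source, status, and version.
# The record fields (source, status, version) are assumptions for this example.
def filter_logs(records, source=None, status=None, version=None):
    """Return the records matching all of the given (optional) criteria."""
    def matches(r):
        return ((source is None or r["source"] == source)
                and (status is None or r["status"] == status)
                and (version is None or r["version"] == version))
    return [r for r in records if matches(r)]

records = [
    {"source": "workflow", "status": "error", "version": 3},
    {"source": "chat", "status": "success", "version": 3},
    {"source": "workflow", "status": "success", "version": 2},
]

# Errors from workflow runs on version 3 only:
broken = filter_logs(records, source="workflow", status="error", version=3)
print(broken)  # [{'source': 'workflow', 'status': 'error', 'version': 3}]
```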

Token Usage

The detail view shows token counts for each run. Use this to:

  • Understand cost per execution before scaling up
  • Spot unexpectedly large inputs that might indicate a bug upstream
  • Compare token usage across model versions when evaluating a model switch
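A back-of-the-envelope cost estimate can be derived directly from those token counts. A sketch with placeholder per-million-token prices; substitute your provider's actual rates:

```python
# Estimating cost per run from the token counts in the detail view.
# These prices are placeholders (assumed), not real provider rates.
INPUT_PRICE_PER_M = 3.00    # USD per 1M input tokens (assumed)
OUTPUT_PRICE_PER_M = 15.00  # USD per 1M output tokens (assumed)

def run_cost(input_tokens, output_tokens):
    """Dollar cost of one run, given its input and output token counts."""
    return (input_tokens / 1_000_000 * INPUT_PRICE_PER_M
            + output_tokens / 1_000_000 * OUTPUT_PRICE_PER_M)

# A run that consumed 1,842 input tokens and 310 output tokens:
cost = run_cost(1842, 310)
print(f"${cost:.6f}")  # prints $0.010176
```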