Prompt Logs
Every time a prompt runs, Aisle records the execution. You can see exactly what was passed in, what came back, which model was used, how many tokens it consumed, and where the run originated.
Logs are available to anyone with edit access to the prompt.
Accessing Logs
Open a prompt and click the Logs tab. You'll see a paginated list of all executions, most recent first.
What the Log Shows
Each row in the list shows:
| Column | Description |
|---|---|
| Timestamp | When the run happened |
| Status | success, error, or cancelled |
| Model | The model version used |
| Source | What triggered the run |
| Source link | Link to the originating chat, workflow, or playground (where available) |
Click any row to open the full execution detail.
Execution Detail
The detail view shows:
- Inputs: every variable passed to the prompt, including any file uploads. Long inputs are truncated in the view, but the full data is preserved.
- Result: the full output. For failed runs, the error message is shown instead.
- Metadata:
- Model and provider
- Input tokens, output tokens, cache metrics
- Source type and source ID
- Access key used (for API runs)
- User email (where the run is attributed to a user)
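As a concrete sketch, an execution detail record might carry the fields above like this. The field names and values here are illustrative, not Aisle's actual schema:

```python
# Hypothetical shape of an execution detail record.
# Field names are illustrative; Aisle's actual schema may differ.
run = {
    "timestamp": "2024-05-01T12:00:00Z",
    "status": "success",            # success | error | cancelled
    "model": "gpt-4o-2024-05-13",
    "provider": "openai",
    "source": "workflow",           # see Source Types below
    "source_id": "wf_123",
    "inputs": {"customer_name": "Acme", "tone": "formal"},
    "result": "Dear Acme, ...",
    "input_tokens": 412,
    "output_tokens": 96,
    "user_email": "dev@example.com",
}

def total_tokens(run: dict) -> int:
    """Sum input and output tokens for a single run."""
    return run["input_tokens"] + run["output_tokens"]

print(total_tokens(run))  # 508
```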
Source Types
The Source column tells you where the run originated:
| Source | Meaning |
|---|---|
| chat | Run from a chat conversation |
| workflow | Triggered by a workflow node |
| api | Called via API entry point |
| playground | Run from a Playground |
| project_tool | Invoked as a tool inside a Project chat |
| mcp | Called via the MCP execution server |
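If you export log rows for offline analysis, the source field makes it easy to see where your traffic comes from. A minimal sketch, assuming each exported row is a dict with a source key (the export format itself is an assumption, not documented Aisle behavior):

```python
from collections import Counter

# Hypothetical exported log rows; only the "source" field matters here.
runs = [
    {"source": "chat"},
    {"source": "workflow"},
    {"source": "workflow"},
    {"source": "api"},
]

# Count executions per source type.
by_source = Counter(run["source"] for run in runs)
print(by_source.most_common())  # [('workflow', 2), ('chat', 1), ('api', 1)]
```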
Filtering
Use the filters above the list to narrow the results:
- Source - show only runs from a specific trigger type
- Status - show only successes or errors
- Version - show runs from a specific prompt version
This is useful for debugging: if a workflow started returning errors after a prompt was updated, filter by workflow source and the affected version to see exactly what changed.
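The same narrowing can be reproduced on exported rows. A minimal sketch of the workflow-errors scenario above, with assumed field names rather than Aisle's actual schema:

```python
# Hypothetical exported log rows.
runs = [
    {"source": "workflow", "status": "error", "version": 7},
    {"source": "workflow", "status": "success", "version": 7},
    {"source": "chat", "status": "error", "version": 7},
    {"source": "workflow", "status": "error", "version": 6},
]

# Workflow runs that errored on version 7 of the prompt.
suspects = [
    r for r in runs
    if r["source"] == "workflow"
    and r["status"] == "error"
    and r["version"] == 7
]
print(len(suspects))  # 1
```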
Token Usage
The detail view shows token counts for each run. Use this to:
- Understand cost per execution before scaling up
- Spot unexpectedly large inputs that might indicate a bug upstream
- Compare token usage across model versions when evaluating a model switch
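For example, a rough per-run cost estimate follows directly from the token counts. The rates below are placeholders; substitute your provider's actual pricing:

```python
# Placeholder per-token rates (dollars); use your provider's real pricing.
INPUT_RATE = 2.50 / 1_000_000    # per input token
OUTPUT_RATE = 10.00 / 1_000_000  # per output token

def run_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated dollar cost of one execution."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. a run that used 412 input tokens and 96 output tokens
print(round(run_cost(412, 96), 6))  # 0.00199
```

Multiplying this out across the runs in the log gives a cost baseline before scaling up.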