Saving and Sharing Playgrounds
How to preserve your testing sessions and share results with your team.
What Gets Saved
When you save a playground, Aisle preserves everything about your testing session:
- Column configurations: which prompts you loaded, which versions you selected, which models you chose, and all settings (temperature, max tokens, etc.)
- Test inputs: all variable values you entered and any files you uploaded
- Actual results: the complete outputs from each column, token usage, and response times
This creates a complete snapshot. You can return to it later, pick up exactly where you left off, and all the outputs are still there.
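To make the snapshot concrete, here is a rough sketch of the kind of data a saved playground captures. The shape and field names below are illustrative only, not Aisle's actual storage schema or API.

```typescript
// Hypothetical sketch of a saved playground snapshot.
// Field names are illustrative; Aisle's real schema may differ.

interface ColumnConfig {
  promptName: string;      // which prompt was loaded
  promptVersion: string;   // which version was selected
  model: string;           // which model was chosen
  temperature: number;     // sampling settings
  maxTokens: number;
}

interface ColumnResult {
  output: string;          // the complete output from this column
  tokensUsed: number;      // token usage
  responseTimeMs: number;  // response time
}

interface PlaygroundSnapshot {
  name: string;                    // e.g. "Customer Feedback Analyzer - v12 vs v13"
  columns: ColumnConfig[];         // one entry per column
  inputs: Record<string, string>;  // variable values you entered
  uploadedFiles: string[];         // names of any files you uploaded
  results: ColumnResult[];         // actual outputs, aligned with columns
}

// Example: comparing two prompt versions on the same input.
const snapshot: PlaygroundSnapshot = {
  name: "Customer Feedback Analyzer - v12 vs v13",
  columns: [
    { promptName: "Customer Feedback Analyzer", promptVersion: "v12", model: "gpt-4", temperature: 0.2, maxTokens: 512 },
    { promptName: "Customer Feedback Analyzer", promptVersion: "v13", model: "gpt-4", temperature: 0.2, maxTokens: 512 },
  ],
  inputs: { feedback: "The checkout flow kept timing out on mobile." },
  uploadedFiles: [],
  results: [
    { output: "Category: Bug; Sentiment: Negative", tokensUsed: 143, responseTimeMs: 2100 },
    { output: "Category: Bug (Checkout); Sentiment: Negative; Priority: High", tokensUsed: 151, responseTimeMs: 2300 },
  ],
};
```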
Rather than capturing outputs in screenshots or pasting text into docs, you preserve the entire test session as a shareable link. Say you're testing three different versions of a prompt and need to show your team which one works best. Save the playground and share the link. Your team sees the exact same columns, inputs, and outputs.
Return to a saved playground weeks later and see exactly what you tested, with what inputs, and what the results were. When something breaks, you can compare a saved playground from when things worked to a new playground with the broken version. The difference tells you exactly what changed.
Saving a Playground
Click the save button in the playground interface. Give it a descriptive name.
Good names:
- "Customer Feedback Analyzer - v12 vs v13"
- "Article Summarizer - Model Comparison (GPT4, Claude, Gemini)"
- "Translation Prompt - Temperature Testing"
Bad names:
- "Playground 5"
- "Test"
- "New Playground"
You'll thank yourself later when you're looking through your saved playgrounds trying to find the one where you tested that specific thing.
Sharing with Your Team
Once saved, you get a shareable link. Anyone with access to your Aisle organization can view that playground.
Getting approval for prompt changes
You want to change a critical business prompt. Your manager needs to approve it.
Set up a playground with the current version and proposed new version. Run both with realistic inputs. Save it. Share the link.
Your manager reviews actual outputs, not your description of what changed. The approval conversation is concrete: "Look at column 2, line 3—that's exactly the format we need."
Collaborating on model selection
Your team is deciding which model to use for a new feature.
Set up a playground testing 3-4 models. Run them with representative inputs. Save and share.
Now the team meeting is productive. Everyone sees the same results. The decision is based on evidence, not opinions about which model "seems better."
Documenting technical decisions
Six months from now, someone will ask "Why did we choose Claude over GPT-4 for this prompt?"
If you saved the playground where you made that comparison, you can link directly to it. The evidence is still there. No need to remember or reconstruct your reasoning.
Teaching best practices
A new team member asks how to improve prompts effectively.
Show them a saved playground where you tested iterations. They see version 1, version 2, version 3 side-by-side. They understand what "better" looks like in practice, not theory.
Accessing Saved Playgrounds
Your saved playgrounds appear in the Playgrounds section. Filter by:
- Your own playgrounds
- Playgrounds shared with you
- Organization-wide playgrounds
Click any saved playground to open it. All the columns, inputs, and results load exactly as you left them.
When to Save
Before deploying changes - Document what you tested before you made the change. If something goes wrong, you have a reference point.
After finding a problem - Capture the broken behavior. This helps when you're fixing it later or explaining what went wrong to others.
During model evaluation - Save comprehensive model comparisons. Refer back when making similar decisions later.
For team reviews - Any time you need to show your work to colleagues, save first, then share.
Naming Strategy
As you accumulate saved playgrounds, organization matters.
Include the prompt name: "Email Response Generator - Model Test" or "Customer Feedback Analysis - v5 vs v6"
Include what you're testing: "Translation Prompt - Temperature Optimization" or "Article Summarizer - Long Form vs Short Form"
Include dates for time-sensitive tests: "Support Ticket Classifier - Q4 2024 Evaluation" or "Sentiment Analysis - New Model Test (Dec 2024)"
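If you want to keep names consistent without thinking about it each time, a small helper like the one below can encode the convention. This is purely an illustrative sketch; Aisle does not require or enforce any particular naming format.

```typescript
// Illustrative helper for building consistent playground names.
// The "Prompt - Focus (date)" pattern is just the convention described above.

function playgroundName(prompt: string, focus: string, dateLabel?: string): string {
  const base = `${prompt} - ${focus}`;
  return dateLabel ? `${base} (${dateLabel})` : base;
}

// Examples matching the conventions above:
console.log(playgroundName("Email Response Generator", "Model Test"));
// "Email Response Generator - Model Test"
console.log(playgroundName("Sentiment Analysis", "New Model Test", "Dec 2024"));
// "Sentiment Analysis - New Model Test (Dec 2024)"
```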
Managing Old Playgrounds
Delete playgrounds that are no longer relevant—one-off experiments that didn't lead anywhere, tests that are now outdated (models changed, prompts deleted), or superseded evaluations (you ran a better test later).
Keep playgrounds that provide long-term value: decision documentation (why you chose X over Y), baseline comparisons for key prompts, or training examples for team members.
A well-maintained playground library is a knowledge base of your team's prompt engineering decisions and best practices.