Deploying Prompts
How to share prompts and make them available to your team and external systems.
Sharing and Deployment Settings
You've built the prompt. Now you need to decide who can use it and how.
Click into Sharing and Deployment to see three separate access controls. They're independent for good reason.
Chat Access
Controls who can use the prompt in Aisle's chat interface.
Options:
- Hidden - Nobody can see or use it (useful while testing)
- Private - Only people who can edit the prompt can use it
- Org-wide - Everyone in your organization can use it
- Shared with specific people - You pick exactly who has access
This is about usage. Can someone open the chat launcher and run this prompt?
Best practice:
- Start Hidden or Private while you develop and test
- Share with a small group for feedback and refinement
- Make it Org-wide once it's proven valuable
Library Access
Controls who can see and edit the prompt in the Prompts section. This is separate from chat access.
You might want everyone in your org to use a prompt in chat, but only your team to be able to modify it. That's what this separation enables.
Same options: Hidden, Private, Org-wide, or specific people. But now it's about who can view the prompt's configuration and make changes.
API Access
Turn this on and you get an endpoint. Now you can call this prompt from your codebase, a script, an automation tool, or anything else that can make an HTTP request.
How it works:
- Enable API access
- Get a URL and optional access key
- Make POST requests with your variable values
- Get back the model's response
Example use:
curl -X POST https://app.aisle.sh/api/prompts/run/abc123 \
-H "Authorization: Bearer YOUR_ACCESS_KEY" \
-H "Content-Type: application/json" \
-d '{"article": "Long article text here..."}'
The Aisle UI shows you the exact request format with your prompt's specific variables.
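The same request translates directly into application code. Here's a minimal sketch in Python using the requests library, reusing the prompt ID, variable name, and placeholder access key from the curl example above; the shape of the response body depends on your prompt, so check it against what the Aisle UI shows.

import requests

ENDPOINT = "https://app.aisle.sh/api/prompts/run/abc123"  # from the prompt's API access panel
ACCESS_KEY = "YOUR_ACCESS_KEY"                            # only needed if an access key is set

def run_prompt(article_text: str) -> str:
    """POST the prompt's variable values and return the model's response."""
    response = requests.post(
        ENDPOINT,
        headers={"Authorization": f"Bearer {ACCESS_KEY}"},
        json={"article": article_text},  # keys must match your prompt's variable names
        timeout=60,
    )
    response.raise_for_status()
    # Return the raw body; inspect it (or the Aisle UI) to see the exact response format.
    return response.text

print(run_prompt("Long article text here..."))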
Why API Access Matters
This is how prompts move from "thing we test in a UI" to "infrastructure our product depends on."
The organizational problem it solves:
Typically, your business users develop AI workflows in one set of tools, while your developers implement AI features in applications using completely different tools and prompts. Improvements in one area don't benefit the other, and you end up with duplicate effort and inconsistent results.
With Aisle's API deployment:
- Business users develop and refine prompts through the chat interface
- Those exact same prompts immediately become available to developers for application integration
- Single source of truth for AI across your company
No need to maintain prompt text in multiple places. Your code calls the Aisle API; Aisle handles the rest.
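This is easiest to see in code: the application carries only a thin client that knows each prompt's ID and variable names, while the instructions themselves live in Aisle. The sketch below reuses the summarization prompt from the example above and adds a hypothetical ticket-triage prompt ID (def456) purely for illustration.

import requests

AISLE_RUN_URL = "https://app.aisle.sh/api/prompts/run"

class AislePrompts:
    """Thin wrapper: no prompt text in the codebase, only prompt IDs and variables."""

    def __init__(self, access_key: str):
        self._headers = {"Authorization": f"Bearer {access_key}"}

    def _run(self, prompt_id: str, variables: dict) -> str:
        response = requests.post(
            f"{AISLE_RUN_URL}/{prompt_id}",
            headers=self._headers,
            json=variables,
            timeout=60,
        )
        response.raise_for_status()
        return response.text

    # When a business user refines one of these prompts in Aisle, this
    # application code does not need to change.
    def summarize_article(self, article: str) -> str:
        return self._run("abc123", {"article": article})   # ID from the example above

    def triage_ticket(self, ticket: str) -> str:
        return self._run("def456", {"ticket": ticket})     # hypothetical ID and variable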
Access Control Strategy
For internal tools everyone uses:
- Chat access: Org-wide
- Library access: Specific team (who maintains it)
- API access: Off (unless needed for integrations)
For experimental/beta features:
- Chat access: Specific people
- Library access: Same specific people
- API access: Off
For production-integrated features:
- Chat access: Org-wide or specific teams
- Library access: Dev team only
- API access: On with access key
For sensitive operations:
- Chat access: Specific people
- Library access: Specific people
- API access: On with access key, limited to specific systems
Logs
The Logs tab shows every execution:
- When did it run?
- How many tokens did it use?
- How long did it take?
- What were the inputs and outputs? (if data logging is enabled)
Why logs matter:
This is how you spot problems. Maybe the prompt works great for short articles but fails on long ones. Maybe it's using way more tokens than expected. Logs tell you what's really happening versus what you thought would happen.
Privacy control:
You can disable data logging in prompt settings. When disabled, logs show execution metadata (when, how long, token count) but not the actual inputs and outputs. Useful for prompts processing sensitive information.
Distribution Strategy
Start Small
- Build and test privately
- Share with 2-3 colleagues for feedback
- Iterate based on their experience
- Expand to wider team
Promote What Works
- Pin valuable prompts (users can pin for themselves; admins can pin org-wide)
- Announce new prompts in team channels
- Show examples of good outputs
- Create documentation if the prompt is complex
Maintain Over Time
- Monitor logs to see usage patterns
- Update prompts when you discover better instructions
- Version control means you can iterate safely
- Deprecate prompts that aren't being used
Managing Multiple Prompts
As your organization builds more prompts:
Naming conventions - Use prefixes or suffixes to group related prompts:
- "Customer - Feedback Analysis"
- "Customer - Support Ticket Triage"
- "Customer - Email Response Draft"
Clear descriptions - Future you (and your teammates) need to quickly understand what each prompt does. Take 30 seconds to write a clear description.
Regular cleanup - Archive or delete prompts that aren't being used. Check logs—if a prompt hasn't been run in 3 months, maybe it's not valuable anymore.
Documentation - For complex prompts with specific use cases, maintain a doc explaining when to use them and what inputs work best.
Deployment Checklist
Before making a prompt org-wide:
- Tested with realistic data
- Tested edge cases
- Clear name and description
- Variables have sensible names
- Model and settings are appropriate
- Display message is set (if needed)
- Data logging configured appropriately
- Sharing settings match intended use
- 2-3 people have tried it and given feedback
- You've monitored initial logs for issues