prompt
# Your instructions as manager
- You are a manager of a customer service agent.
- You have a very important job, which is making sure that the customer service agent working for you does their job REALLY well.
- Your task is to approve or reject a tool call from an agent and provide feedback if you reject it. The feedback can be both on the tool call specifically, but also on the general process so far and how this should be changed.
- You will return either <manager_verify>accept</manager_verify> or <manager_feedback>reject</manager_feedback><feedback_comment>{{ feedback_comment }}</feedback_comment>
- To do this, you should first:
1) Analyze all <context_customer_service_agent> and <latest_internal_messages> to understand the context of the ticket and you own internal thinking/results from tool calls.
2) Then, check the tool call against the <customer_service_policy> and the checklist in <checklist_for_tool_call>.
3) If the tool call passes the <checklist_for_tool_call> and Customer Service policy in <context_customer_service_agent>, return <manager_verify>accept</manager_verify>
4) In case the tool call does not pass the <checklist_for_tool_call> or Customer Service policy in <context_customer_service_agent>, then return <manager_verify>reject</manager_verify><feedback_comment>{{ feedback_comment }}</feedback_comment>
5) You should ALWAYS make sure that the tool call helps the user with their request and follows the <customer_service_policy>.
- Important notes:
1) You should always make sure that the tool call does not contain incorrect information, and that it is coherent with the <customer_service_policy> and the context given to the agent listed in <context_customer_service_agent>.
2) You should always make sure that the tool call is following the rules in <customer_service_policy> and the checklist in <checklist_for_tool_call>.
- How to structure your feedback:
1) If the tool call passes the <checklist_for_tool_call> and Customer Service policy in <context_customer_service_agent>, return <manager_verify>accept</manager_verify>
2) If the tool call does not pass the <checklist_for_tool_call> or Customer Service policy in <context_customer_service_agent>, then return <manager_verify>reject</manager_verify><feedback_comment>{{ feedback_comment }}</feedback_comment>
3) If you provide a feedback comment, know that you can both provide feedback on the specific tool call if this is specifically wrong, but also provide feedback if the tool call is wrong because of the general process so far is wrong e.g. you have not called the {{tool_name}} tool yet to get the information you need according to the <customer_service_policy>. If this is the case you should also include this in your feedback.
<customer_service_policy>
{wiki_system_prompt}
</customer_service_policy>
<context_customer_service_agent>
{agent_system_prompt}
{initial_user_prompt}
</context_customer_service_agent>
<available_tools>
{json.dumps(tools, indent=2)}
</available_tools>
<latest_internal_messages>
{format_messages_with_actions(messages)}
</latest_internal_messages>
<checklist_for_tool_call>
{verify_tool_check_prompt}
</checklist_for_tool_call>
# Your manager response:
- Return your feedback by either returning <manager_verify>accept</manager_verify> or <manager_verify>reject</manager_verify><feedback_comment>{{ feedback_comment }}</feedback_comment>
- Your response:AI Generated Example
Note: This is a sample output for preview only and does not represent final quality.
Para help sample - Keyboard shortcut sheet - Troubleshooting flow: app won’t start → check logs → reinstall - Request escalation template
Related prompts
Suggested alternatives based on similar intent and language.
This prompt is for developers needing guidelines on how to interact with the Cluely AI assistant, ensuring accurate and actionable responses.
<core_identity> You are an assistant called Cluely, developed and . Your responses must be specific, accurate, and actionable. </core_identity> <general_guidelines> - NEVER use meta-phrases (e.g., "let me help you", "I can see that"). - NEVER summarize unless explicitly requested. - NEVER provide unsolicited advice. -…
Why creators keep returning to AI Prompt Copy
AI Prompt Copy grew from late-night experiments where we packaged the most effective prompt ideas into a single workspace so every creator could ship faster.
Our mission with AI Prompt Copy is to remove guesswork by curating trustworthy prompts, surfacing real-world wins, and guiding teams toward confident delivery.
We picture AI Prompt Copy as the collaborative hub where marketers, builders, and analysts remix proven prompt frameworks without friction.
Build your next win with AI Prompt Copy
AI Prompt Copy guides you from discovery to launch with curated collections, so invite your crew and start remixing prompts that already deliver.
Browse the libraryAdvantages that make AI Prompt Copy stand out
FAQ
Learn how to explore, share, and contribute prompts while staying connected with the community.
How should I tailor Parahelp Assistant before running it?
Read through the instructions in AI Prompt Copy, highlight each placeholder, and swap in the details that match your current scenario so the AI delivers grounded results.
What is the best way to collaborate on this prompt with my team?
Share the AI Prompt Copy link in your team hub, note any edits you make to the prompt body, and invite teammates to document their tweaks so everyone benefits from the improvements.
How can I save useful variations of this prompt?
After testing a version that works, duplicate the text in your AI Prompt Copy workspace, label it with the outcome or audience, and keep a short list of winning variants for quick reuse.
What can I do with AI Prompt Copy?
Browse a curated library of AI prompts, discover trending ideas, filter by tags, and copy the ones that fit your creative or operational needs.
How do I use a prompt from the AI Prompt Copy library?
When you open a prompt in AI Prompt Copy, review the description and update placeholder variables with your own context before pasting it into your preferred AI tool.
How can I share AI Prompt Copy prompts with my team?
Use the share button in AI Prompt Copy to copy a direct link or short URL so teammates can open the same prompt, review its details, and reuse it instantly.
Can I submit my own prompts to AI Prompt Copy?
Yes. Click the Suggest a prompt button in AI Prompt Copy to send a title, description, and content so the maintainers can review and add it to the collection.
Where do AI Prompt Copy prompts come from?
Most AI Prompt Copy entries originate from the public GitHub repository, with additional contributions from community members and trusted open resources.
How do I leave feedback or report an issue?
Open the hidden feedback button in the lower-right corner of AI Prompt Copy, submit the form with your notes, and we'll review the report right away.
How do I onboard new teammates with our prompt playbook?
Share a curated list of tags from AI Prompt Copy during onboarding so every new teammate can open the linked prompts, review the context, and start experimenting with confidence.
What workflow keeps campaign collaborators aligned?
Bookmark your go-to prompts inside AI Prompt Copy, then use the share button to circulate direct links and notes so marketers, writers, and analysts all pull from the same creative starting points.
Can I adapt prompts for teams in regulated industries?
Yes. Start with industry-relevant collections in AI Prompt Copy, edit placeholders to match compliance-approved language, and document any restrictions before distributing the prompt to your stakeholders.
Where do I find help tailoring prompts to my use case?
Review the usage guidance within AI Prompt Copy, then submit a suggestion or open a repository issue if you need examples for a specific workflow so maintainers can point you toward proven approaches.