Guide to AI Agent Instructions and Template


At ReadiNow, it’s crucial to structure prompts in a way that ensures clarity for both the LLM (Large Language Model) and the user. By following consistent sections and formatting conventions, we help the LLM focus on important components like tools, field names, and record types, and guide its responses accordingly. This approach not only produces better results from the LLM but also makes instructions easier to read, write, and maintain.

The 7 Building Blocks of Great Instructions

Using a consistent instruction template improves clarity for both the AI agent and the agent builders, making instructions easier to create, understand, and maintain over time. 

However, a one-size-fits-all approach doesn’t apply—different agents are designed for different purposes. Some are analytical, focused on evaluating and reasoning through data, while others are task-oriented and must coordinate a sequence of subtasks or user interactions. Your instruction design should reflect these differences while still adhering to a clear structural foundation.
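
For example, an analytical agent and a task-oriented agent might use the same headings quite differently. The sketch below is illustrative only (the wording is a placeholder, not standard content); note that the analytical agent needs no Steps section, while the task-oriented agent relies on one:

## Role
You are an expert in enterprise risk reporting.
## Goal
Your goal is to summarise the key themes across open **Risk** records and highlight any gaps in controls.

versus a task-oriented agent:

## Role
You are an expert in enterprise risk assessment.
## Goal
Your goal is to guide the user through completing a risk assessment.
## Steps
1. Read the current **Risk** record: {Name}, {Description}
2. Present suggested ratings and ask the user whether to apply them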

This section defines the core headings commonly used in agent instructions. Use what’s relevant—skip what’s not.

1. Role

Define the domain expertise your agent should reflect.

## Role
You are an expert in enterprise risk assessment and policy interpretation.

2. Goal

Define the intended outcome of the agent’s task.

  • A clear goal helps the LLM stay focused, deliver targeted responses, and avoid unnecessary output.
  • This section also sets the success criteria—what a “good” outcome looks like from the agent.
## Goal
Your goal is to guide the user through completing a risk assessment and recommending next steps.

**Success Criteria:**
- All suggestions align with the organisation’s Risk Matrix Policy
- Only active controls are used
- The user is given a concise summary and clear options for next steps

3. Input

Define what information the agent needs to do its job effectively. This includes both static reference material (knowledge) and real-time record data (context).

Being explicit about inputs helps the LLM in three key ways:

  • Clarity of available data: The LLM knows exactly what information it has to work with—reducing ambiguity and hallucination.
  • Context window efficiency: When working with large records or documents, clearly declared inputs help the LLM focus only on what's relevant.
  • Simpler instructions: By establishing inputs upfront, your steps can stay concise—no need to repeat data descriptions mid-instruction.
## Input
// The following named inputs are referenced throughout the instructions and steps:
- **Risk** (current record): Includes {Name}, {Description}, and {RiskCategory}
- **Risk Matrix Policy** (query document tool): Used to determine Likelihood and Consequence definitions
- **Active Control Library** (evaluation tool): Provides eligible Controls with {Status} = "Active"

4. Decision Criteria

Use this section to describe any general reasoning or evaluation rules the agent should follow across different scenarios. 

  • If the logic is specific to a particular step, include it directly within the Steps section instead.

## Decision Criteria
// Apply the following rules and interpretation logic when assessing the risk
- Use the {Impact}, {Urgency}, and {Service} fields from the current record to assess priority.
- Recommend preventative controls unless the context clearly indicates a need for detective or corrective measures.
- Ask the user for clarification if {RiskDescription} is missing, vague, or contradictory.
- Prioritize solutions that match previous similar incidents with successful resolutions.

5. Steps

Map the interaction flow. 

  • Keep steps short and clear.
  • Can be skipped for agents that are only providing information or analysis.
## Steps
1. Read the current **Risk** record: {Name}, {Description}
2. Query **Risk Matrix Policy** for Likelihood and Consequence definitions
3. Use **Active Control Library** to find {Status} = "Active" controls
4. Recommend values for {Inherent Likelihood}, {Inherent Consequence}
5. Present suggestions to the user
6. The user chooses which suggestions to apply
7. Ask if they’d like to apply these changes
8. If yes, update the **Risk** record and link selected controls

6. Guardrails

Guardrails are constraints or boundaries that prevent the agent from taking certain actions, regardless of preferences or goals.

## Guardrails
- Always use the policies to make decisions
- Always state why and how you derived the answer
- Do not guess or infer policy if unclear
- Do not suggest inactive controls

7. Output Format

Define the expected format for the agent’s response using Markdown. Use examples to guide the agent on structure, tone, and level of detail. This ensures clarity, consistency, and high-quality responses.

Structured formatting improves the relevance, accuracy, and usefulness of the response, especially when the output is tailored to a specific audience or purpose.

## Output Format
### Suggested Inherent Risk Ratings
- Inherent Likelihood: {value}
- Inherent Consequence: {value}
- Rationale: {short explanation}

### Suggested Controls
- {Control Name 1}: {Control Description}
- {Control Name 2}: {Control Description}

Would you like to apply these changes?

### Example
#### Third-Party Data Exposure
**Inherent Likelihood**: Likely
**Inherent Consequence**: Major
**Rationale**: The risk description indicates that sensitive customer data is processed by multiple vendors without end-to-end encryption. Based on the Risk Matrix Policy, this aligns with a high frequency of occurrence and significant impact on privacy compliance.

##### Suggested Controls:
- Third-Party Risk Assessment Procedure: Requires all vendors to complete annual security audits and submit evidence of data protection practices.
- Data Encryption Standard: Enforces encryption of all sensitive data in transit and at rest for any third-party integrations.

Formatting Conventions

To help the AI agent interpret your instructions consistently, use these formatting conventions for ReadiNow instructions:




| Convention | Example | Why It Matters |
|------------|---------|----------------|
| Bold for tools and record types | **Risk**, **Active Controls in Library** | Signals to the LLM that these are key components (tools, objects) in the workflow. |
| {Curly braces} for field names | {RiskDescription}, {Status} | Helps the LLM distinguish fields from regular text and treat them as structured data. |
| // for internal comments | // Ask the user to confirm before creating | Adds builder-only notes that don’t confuse the agent logic or output. |
| Numbered steps with clear verbs | 1. Read the current **Risk** record: {Name}, {Description} | Provides the agent with ordered, predictable logic, making it easier to execute instructions sequentially. |
| Markdown output formatting | ### Risk Summary, **Impact:** {value} | Ensures clean, readable output that users can scan easily. |
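
Taken together, a single step that applies all of these conventions might look like the following (an illustrative line, not taken from a specific agent):

// Builder note: suggest ratings only after the policy definitions have been retrieved
4. Compare {Description} against the **Risk Matrix Policy** definitions and recommend {Inherent Likelihood} and {Inherent Consequence}

Here the comment is visible only to builders, the bold name marks the tool, and the curly braces mark fields the agent should treat as structured data.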



Helping the LLM with Ambiguity

Formatting hierarchy plays a crucial role in reducing ambiguity in the prompt. By using clear visual cues like bold for record types or braces for field names, we can ensure the LLM distinguishes between instructions, data points, and other components. This makes it less likely for the model to mistakenly treat a field name as part of a conversational context or instruction.
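
As an illustrative comparison (not taken from a real agent), consider the same instruction written with and without these cues:

Without formatting: Check the risk description and status before suggesting controls from the library.
With formatting: Check {RiskDescription} and {Status} before suggesting controls from the **Active Control Library**.

In the first version the model has to guess whether “status” refers to a field, a tool, or ordinary prose; in the second, the braces and bold make each component unambiguous.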

Leveraging Context Windows

The LLM’s context window is influenced by how the prompt is structured. More structured prompts with clear formatting hierarchy allow the LLM to better understand and process larger, more complex inputs. This is particularly useful when dealing with multiple pieces of structured data, where each component is clearly delineated by format.

By properly structuring data, you maximize the efficiency of the context window, allowing the model to focus on and process each part of the prompt individually and accurately.
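
As a brief sketch of what this looks like in practice (illustrative only), declaring inputs once lets every later step refer back to a named block instead of restating the data:

## Input
- **Risk** (current record): {Name}, {Description}, {RiskCategory}
- **Risk Matrix Policy** (query document tool): Likelihood and Consequence definitions

## Steps
1. Read the **Risk** record
2. Apply the **Risk Matrix Policy** definitions to {Description}

Because the inputs are declared up front, the steps stay short and the model does not need the full policy text repeated at every step.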

In Summary

Formatting helps create a visual and logical hierarchy that guides the LLM’s understanding. By using consistent formatting conventions, you enable the model to more easily distinguish between instructions, field names, record types, and variables, and to process and respond to each part of the prompt in the intended way. These distinctions contribute to a clearer, more structured prompt that the LLM can follow accurately and efficiently.


Copy-Paste Template (for agent instructions)

Below is a ready-to-use format you can paste into the agent’s instruction field.

## Role  
You are an expert in enterprise risk assessment and risk policy interpretation. 
You support high-quality, consistent risk assessments by referencing standard policies and identifying appropriate controls.


## Goal  
Your goal is to guide the user through completing an inherent risk assessment and recommend appropriate next steps.


## Inputs  
// The following named inputs are referenced throughout the instructions and steps:  
- **Risk** (current record): Fields to be assessed  
- **Risk Matrix Policy** (query document tool): Used to determine the official definitions of Likelihood and Consequence  
- **Active Control Library** (evaluation tool): Provides the list of eligible Controls with {Status} = "Active"


## Decision Criteria  
// Apply the following rules and interpretation logic when assessing the risk  
- Recommend controls that are contextually relevant to the nature of the risk (e.g., cyber, safety, fraud, operational)  
- Prefer preventative controls unless the risk scenario calls for detective or corrective controls  
- Ask the user for clarification if key information is vague, missing, or contradictory


## Steps  
1. Read the current **Risk** record fields: {Name}, {Description}  
2. Read the **Risk Matrix Policy** and extract the organisation’s definitions for Likelihood and Consequence  
3. Use the **Active Control Library** tool to retrieve all active Control Library records  
4. Analyse the risk description and recommend:  
   - An appropriate {Inherent Likelihood} and {Inherent Consequence}  
   - One or more matching **Active Control Library** entries to mitigate the risk  
5. Present suggestions clearly to the user  
6. Ask the user whether they’d like to:  
   - Apply the suggested ratings  
   - Link the suggested controls to the current **Risk**  
7. Upon confirmation:  
   - Update the **Risk** record fields  
   - Create Control records linked to the **Risk** and the selected Control Library entries


## Guardrails  
- Do not create duplicate Control links for a **Risk**-**Control Library** pair  
- Only use Control Library entries with {Status} = "Active"  
- Do not hallucinate. If unsure, ask the user for clarification


## Output Format  
Suggested Inherent Risk Ratings:  
- Inherent Likelihood: {value}  
- Inherent Consequence: {value}  
- Rationale: {Short explanation}


Suggested Controls:  
- {Control Name 1}: {Control Description}  
- {Control Name 2}: {Control Description}  


Would you like to apply these ratings and controls to the current **Risk**?
