Class: Ace::Test::EndToEndRunner::Atoms::PromptBuilder
- Inherits: Object
- Defined in: lib/ace/test/end_to_end_runner/atoms/prompt_builder.rb
Overview
Builds LLM prompts for E2E test execution
Creates a system prompt that instructs the LLM to execute a test scenario and return structured JSON results, along with the user prompt containing the test scenario content.
Constant Summary
- TC_SYSTEM_PROMPT =

  System prompt for TC-level (single test case) execution

    <<~PROMPT
      You are an E2E test executor for the ACE (Agentic Coding Environment) toolkit. Your task is to execute a single test case in a pre-populated sandbox and return structured results.

      ## Instructions

      1. The test sandbox is pre-populated at the path provided — do NOT create or modify the sandbox setup
      2. Read the test case steps carefully
      3. Execute the test case steps in the sandbox
      4. Record pass/fail status
      5. Return results as JSON

      ## Output Format

      You MUST return a JSON block wrapped in ```json fences with these fields:

      ```json
      {
        "test_id": "TS-XXX-NNN",
        "tc_id": "TC-NNN",
        "status": "pass|fail",
        "actual": "What actually happened",
        "notes": "Any additional observations",
        "summary": "Brief result"
      }
      ```

      ## Rules

      - Execute ONLY the single test case provided
      - Execute in the pre-populated sandbox (do not modify setup files)
      - Record actual output/behavior, not just expected
      - If the test case cannot be executed (missing tool, permission error), mark as "fail" with explanation
    PROMPT
- SYSTEM_PROMPT =

  System prompt for full-scenario (all test cases) execution

    <<~PROMPT
      You are an E2E test executor for the ACE (Agentic Coding Environment) toolkit. Your task is to execute the provided test scenario step by step and return structured results.

      ## Instructions

      1. Read the test scenario carefully
      2. Execute the Environment Setup commands
      3. Create any Test Data as specified
      4. Execute each Test Case (TC-NNN) in order
      5. Record pass/fail status for each test case
      6. Return results as JSON

      ## Output Format

      You MUST return a JSON block wrapped in ```json fences with these fields:

      ```json
      {
        "test_id": "TS-XXX-NNN",
        "status": "pass|fail|partial",
        "test_cases": [
          {
            "id": "TC-001",
            "description": "Brief description",
            "status": "pass|fail",
            "actual": "What actually happened",
            "notes": "Any additional observations"
          }
        ],
        "summary": "Brief execution summary",
        "observations": "Any friction points or issues discovered"
      }
      ```

      ## Rules

      - Execute ALL test cases, even if earlier ones fail
      - Record actual output/behavior, not just expected
      - Use "partial" status if some test cases pass and some fail
      - Include meaningful observations about tool behavior
      - If a test case cannot be executed (missing tool, permission error), mark as "fail" with explanation
    PROMPT
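Both system prompts require the reply to carry a ```json-fenced block. A minimal sketch of how a caller might pull that block back out of an LLM reply — the runner's real result parser is not shown on this page, so the `extract_result_json` name and the regex approach are assumptions:

```ruby
require "json"

# A literal triple backtick, built up so the example stays readable.
FENCE = "`" * 3

# Hypothetical helper: find the first FENCE-json ... FENCE block in the
# reply and parse its body. Returns nil when no such block is present.
def extract_result_json(reply)
  match = reply.match(/#{FENCE}json\s*(.*?)#{FENCE}/m)
  match && JSON.parse(match[1])
end

# Sample reply shaped like the Output Format the system prompts demand.
reply = <<~REPLY
  Executed the scenario.

  #{FENCE}json
  {"test_id": "TS-XXX-NNN", "status": "pass", "summary": "All cases passed"}
  #{FENCE}
REPLY

result = extract_result_json(reply)
puts result["status"]
```

Because the prompts mandate the fence, a missing block (nil return) can be treated as an execution failure rather than guessed around.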
Instance Method Summary
-
#build(scenario, test_cases: nil) ⇒ String
Build the user prompt for a test scenario.
-
#build_tc(test_case:, scenario:, sandbox_path:) ⇒ String
Build a TC-level user prompt for a single test case.
Instance Method Details
#build(scenario, test_cases: nil) ⇒ String
Build the user prompt for a test scenario
# File 'lib/ace/test/end_to_end_runner/atoms/prompt_builder.rb', line 150

def build(scenario, test_cases: nil)
  filter_instruction =
    if test_cases&.any?
      "\n**IMPORTANT:** Execute ONLY the following test cases: #{test_cases.join(", ")}. Skip all other test cases.\n"
    else
      ""
    end

  pending_instruction = build_pending_tc_skip_instruction(scenario)

  execute_instruction =
    if test_cases&.any?
      "Execute only the specified test cases (#{test_cases.join(", ")}) and return the JSON results as specified in your instructions."
    else
      "Execute all test cases in this scenario and return the JSON results as specified in your instructions."
    end

  <<~PROMPT
    # Execute E2E Test: #{scenario.test_id}

    **Package:** #{scenario.package}
    **Title:** #{scenario.title}
    **Priority:** #{scenario.priority}
    #{filter_instruction}#{pending_instruction}
    ## Test Scenario

    #{scenario.content}

    ---

    #{execute_instruction}
  PROMPT
end
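The filtering branch can be exercised in isolation. A minimal sketch, assuming only that the scenario object responds to the attributes `#build` reads — the `Scenario` struct, the standalone `filter_instruction` helper, and all sample values below are hypothetical stand-ins, not the runner's real classes:

```ruby
# Hypothetical stand-in for the real scenario object read by #build.
Scenario = Struct.new(:test_id, :package, :title, :priority, :content,
                      keyword_init: true)

# Condensed re-sketch of the filter logic: when a test_cases list is given,
# the prompt tells the LLM to run only those cases.
def filter_instruction(test_cases)
  return "" unless test_cases&.any?
  "\n**IMPORTANT:** Execute ONLY the following test cases: " \
    "#{test_cases.join(", ")}. Skip all other test cases.\n"
end

scenario = Scenario.new(
  test_id: "TS-CLI-001", package: "ace-cli", title: "Smoke test",
  priority: "P1", content: "### TC-001\nRun the version command."
)

prompt = <<~PROMPT
  # Execute E2E Test: #{scenario.test_id}

  **Package:** #{scenario.package}
  **Title:** #{scenario.title}
  **Priority:** #{scenario.priority}
  #{filter_instruction(%w[TC-001 TC-003])}
  ## Test Scenario

  #{scenario.content}
PROMPT

puts prompt
```

With an empty or nil list the instruction collapses to `""`, so the unfiltered prompt carries no IMPORTANT banner at all.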
#build_tc(test_case:, scenario:, sandbox_path:) ⇒ String
Build a TC-level user prompt for a single test case
# File 'lib/ace/test/end_to_end_runner/atoms/prompt_builder.rb', line 101

def build_tc(test_case:, scenario:, sandbox_path:)
  if test_case.pending?
    return <<~PROMPT
      # SKIP Test Case: #{scenario.test_id} / #{test_case.tc_id}

      **Package:** #{scenario.package}
      **Scenario:** #{scenario.title}
      **Test Case:** #{test_case.title}
      **Status:** PENDING — #{test_case.pending}

      This test case is marked as pending and should NOT be executed. Return the following JSON result:

      ```json
      {
        "test_id": "#{scenario.test_id}",
        "tc_id": "#{test_case.tc_id}",
        "status": "skip",
        "actual": "Skipped — pending",
        "notes": "#{test_case.pending}",
        "summary": "Pending: #{test_case.pending}"
      }
      ```
    PROMPT
  end

  <<~PROMPT
    # Execute Test Case: #{scenario.test_id} / #{test_case.tc_id}

    **Package:** #{scenario.package}
    **Scenario:** #{scenario.title}
    **Test Case:** #{test_case.title}
    **Sandbox Path:** #{sandbox_path}

    ## Test Case Content

    #{test_case.content}

    ---

    Execute the test case steps in the sandbox at `#{sandbox_path}` and return JSON results as specified in your instructions.
  PROMPT
end
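The pending branch never executes anything: it hands the LLM a pre-built "skip" result to echo back. A minimal sketch of that result shape, assuming a test case object with a `#pending?` predicate — the `TestCase` struct and the sample pending reason below are hypothetical stand-ins for the runner's real classes:

```ruby
require "json"

# Hypothetical stand-in for the real test case object read by #build_tc.
TestCase = Struct.new(:tc_id, :title, :pending, keyword_init: true) do
  # Mirrors the #pending? predicate #build_tc relies on: a test case is
  # pending when it carries a (non-nil) pending reason.
  def pending?
    !pending.nil?
  end
end

tc = TestCase.new(tc_id: "TC-002", title: "Upload large file",
                  pending: "awaiting sandbox network support")

# Condensed re-sketch of the skip result embedded in the SKIP prompt.
skip_result = {
  "tc_id"   => tc.tc_id,
  "status"  => "skip",
  "actual"  => "Skipped — pending",
  "notes"   => tc.pending,
  "summary" => "Pending: #{tc.pending}"
}

puts JSON.generate(skip_result) if tc.pending?
```

Fixing the JSON in the prompt keeps pending cases uniform in the aggregated results without spending an execution turn on them.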