- **Parallel test issues**: Make sure test data is unique per worker, or run tests serially.
- **HTML report not generated**: Ensure `pytest-html` is installed and use the `--html` option.
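For the parallel case, one common approach is to key test data on the worker ID that pytest-xdist exposes through the `PYTEST_XDIST_WORKER` environment variable (e.g. `gw0`, `gw1`). A minimal sketch; the helper name is illustrative, not part of this project:

```python
import os

def unique_name(base: str) -> str:
    """Return a name that is unique per pytest-xdist worker.

    pytest-xdist sets PYTEST_XDIST_WORKER (e.g. "gw0") in each worker
    process; in a serial run the variable is absent, so we fall back
    to a fixed suffix.
    """
    worker = os.environ.get("PYTEST_XDIST_WORKER", "serial")
    return f"{base}-{worker}"
```

Using this helper when creating fixtures or records keeps workers from colliding on shared resources such as database rows or temp files.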
---
## CI/CD & GitHub Actions

This project uses GitHub Actions for automated workflows:

- **Code Analysis**: Runs linting and static analysis on every push and pull request.
- **Test Execution**: Runs all tests and generates reports for every push and pull request, including parallel execution, and publishes Allure/HTML reports.
- **Badges**: See the top of this README for the live status of these workflows.

Workflow files are located in `.github/workflows/`.
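A workflow along these lines might look roughly like the following. This is a sketch only: the file name, action versions, Python version, and report paths are assumptions; the actual files in `.github/workflows/` are authoritative.

```yaml
# Hypothetical sketch of .github/workflows/test-execution.yml
name: Test Execution & Publish Report

on:
  push:
  pull_request:
  workflow_dispatch:   # enables the manual "Run workflow" button in the Actions tab

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -r requirements.txt
      # Parallel execution via pytest-xdist; HTML and Allure output paths assumed
      - run: pytest -n auto --html=report.html --self-contained-html --alluredir=allure-results
      - uses: actions/upload-artifact@v4
        if: always()   # upload reports even when tests fail
        with:
          name: test-reports
          path: |
            report.html
            allure-results
```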
## Running the Test Execution Workflow (GitHub Actions)

### How to Trigger

- The workflow runs automatically on every push and pull request to the repository.
- You can also trigger it manually from the GitHub Actions tab by selecting the `Test Execution & Publish Report` workflow and clicking 'Run workflow'.
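Manual runs can also be scripted against GitHub's `workflow_dispatch` REST endpoint. A minimal sketch that only builds the request pieces; `OWNER`, `REPO`, and the workflow file name are placeholders, and an access token with workflow permissions is needed to actually send it:

```python
import json

def dispatch_request(owner: str, repo: str, workflow_file: str, ref: str = "main"):
    """Build the URL, headers, and JSON body for a workflow_dispatch API call.

    Sending the POST (with any HTTP client) requires an Authorization
    header carrying a real token; it is deliberately left commented out.
    """
    url = (
        f"https://api.github.com/repos/{owner}/{repo}"
        f"/actions/workflows/{workflow_file}/dispatches"
    )
    headers = {
        "Accept": "application/vnd.github+json",
        # "Authorization": "Bearer <token>",  # supply a real token when sending
    }
    body = json.dumps({"ref": ref})
    return url, headers, body

url, headers, body = dispatch_request("OWNER", "REPO", "test-execution.yml")
```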
### What the Workflow Does

- Installs Python and all dependencies from `requirements.txt`.
- Runs all tests using pytest (including parallel execution with xdist).
- Publishes test results as HTML and Allure reports (if configured).
- Uploads the reports as workflow artifacts for download and review.
- Updates the status badges at the top of the README to reflect the latest run.
### Posting Results to Azure

- After test execution, the workflow posts the test results to Azure (e.g., Azure DevOps, Azure Storage, or a custom API endpoint).
- This enables centralized reporting, dashboard integration, or further automation in your Azure environment.
- The step uses secure credentials and API endpoints configured in your repository secrets.
- You can customize the target Azure service and payload format as needed for your organization.
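The posting step can be sketched as a small script. Everything below is an assumption for illustration: `AZURE_RESULTS_URL` and `AZURE_RESULTS_TOKEN` are hypothetical names for values held in repository secrets, and the payload shape is not a documented contract.

```python
import json
import os

def build_payload(results: list[dict]) -> dict:
    """Aggregate per-test results into a summary payload (assumed shape)."""
    passed = sum(1 for r in results if r["outcome"] == "passed")
    return {
        "total": len(results),
        "passed": passed,
        "failed": len(results) - passed,
        "results": results,
    }

def post_to_azure(results: list[dict]) -> None:
    """POST the payload to the endpoint configured in repository secrets."""
    import urllib.request  # stdlib-only so the sketch stays self-contained
    req = urllib.request.Request(
        os.environ["AZURE_RESULTS_URL"],  # hypothetical secret name
        data=json.dumps(build_payload(results)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['AZURE_RESULTS_TOKEN']}",
        },
    )
    urllib.request.urlopen(req)
```

Swapping the endpoint and headers adapts the same pattern to Azure DevOps, Azure Storage, or a custom API.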
### Viewing Results

- Go to the 'Actions' tab in your GitHub repository.
- Select the latest run of `Test Execution & Publish Report`.
- Download the HTML/Allure report artifacts for detailed results.
<h2><i class="fas fa-link"></i> Test Case Mapping & Result Collection</h2>
<ul>
  <li><strong>Test Case Mapping:</strong> Each test function is mapped to a unique test case ID using
    <code>test-plan-suite.json</code> for traceability and reporting.
  </li>
  <li><strong>Result Collection:</strong> Test results are collected for each test case, including
    outcome, duration (ms), and iteration details. Results are aggregated and written to
    <code>test-results/test-results-report.json</code> after each run, supporting both serial and
    parallel execution.
  </li>