Separate Unit and Integration Test Runs #3907
Conversation
- Project runs will perform unit tests (i.e. the MDS Unit and Functional suites).
- Package runs will perform integration tests (i.e. the MDS Manual suite).
Pull request overview
This pull request optimizes CI/CD pipeline execution by separating unit and integration tests to reduce redundant test execution. The PR introduces a testType parameter that controls which test suites run in different pipeline contexts.
Changes:
- Added a testType parameter ('All', 'Unit', or 'Integration') to the pipeline templates
- Modified PR pipelines to run only the relevant test suites (Unit tests in Project builds, Integration tests in Package builds)
- Updated test execution conditionals to respect the new testType parameter
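To illustrate the mechanism, here is a minimal sketch of what a testType parameter and conditional test steps could look like in an Azure Pipelines template. This is hypothetical: the parameter declaration style, step names, and `dotnet test --filter` invocations are illustrative assumptions, not the PR's actual template contents.

```yaml
# Hypothetical sketch only; real parameter wiring lives in the repo's
# eng/pipelines templates and may differ in names and structure.
parameters:
  - name: testType
    type: string
    default: 'All'
    values:
      - 'All'
      - 'Unit'
      - 'Integration'

steps:
  # Unit/Functional suites run for 'All' and 'Unit' test types.
  - ${{ if in(parameters.testType, 'All', 'Unit') }}:
    - script: dotnet test --filter "Category=Unit|Category=Functional"
      displayName: Run unit and functional tests
  # Manual (integration) suite runs for 'All' and 'Integration' test types.
  - ${{ if in(parameters.testType, 'All', 'Integration') }}:
    - script: dotnet test --filter "Category=Manual"
      displayName: Run integration (manual) tests
```

With this shape, the CI pipeline can leave testType at its default of 'All' while each PR pipeline passes only the slice it needs.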
Reviewed changes
Copilot reviewed 6 out of 6 changed files in this pull request and generated 3 comments.
Show a summary per file
| File | Description |
|---|---|
| eng/pipelines/sqlclient-pr-project-ref-pipeline.yml | Sets testType to 'Unit' for project reference builds, restricting to unit and functional tests |
| eng/pipelines/sqlclient-pr-package-ref-pipeline.yml | Sets testType to 'Integration' for package reference builds, restricting to manual tests |
| eng/pipelines/dotnet-sqlclient-ci-core.yml | Defines testType parameter with default 'All' for CI pipelines to continue running all tests |
| eng/pipelines/common/templates/steps/run-all-tests-step.yml | Implements conditional test execution based on testType parameter |
| eng/pipelines/common/templates/stages/ci-run-tests-stage.yml | Propagates testType parameter through stage template |
| eng/pipelines/common/templates/jobs/ci-run-tests-job.yml | Propagates testType parameter through job template |
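The stage and job templates in the table above mainly propagate the parameter downward. A hedged sketch of that pass-through, using the file names from the table but with assumed internal structure:

```yaml
# Hypothetical sketch of ci-run-tests-stage.yml passing testType through
# to the job template; stage/job names here are illustrative.
parameters:
  - name: testType
    type: string
    default: 'All'

stages:
  - stage: RunTests
    jobs:
      - template: ../jobs/ci-run-tests-job.yml
        parameters:
          # Forward the caller's choice unchanged.
          testType: ${{ parameters.testType }}
```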
Codecov Report: ✅ All modified and coverable lines are covered by tests.

```
@@            Coverage Diff             @@
##             main    #3907      +/-   ##
===========================================
- Coverage   76.90%   28.84%   -48.07%
===========================================
  Files         269      263        -6
  Lines       43246    66170    +22924
===========================================
- Hits        33260    19089    -14171
- Misses       9986    47081    +37095
```
benrr101 left a comment:
Overall, probably a good change. The main issue I have right now is the lumping of functional tests in with unit tests. My understanding was that they were not unit tests in theory; they were just simple integration tests that validate the most basic behavior. As it turns out, a lot of those are unit tests, but not the majority. I've been working on the side to move the unit tests out of the functional tests and into the unit test project. Ultimately I think we're aligned on what we want: unit test, local integration test, remote integration test, stress test, and fuzz test projects. This change is kinda orthogonal to those goals, without getting in the way of them. So go for it 😆
I think another low-ish hanging fruit is to separate the unit tests and functional tests into separate jobs, like the test sets. That way we don't need to repeat the unit and functional tests four times for each platform. They're fast, yes, but not instantaneous, and I imagine we could shave a decent amount of time off a build by doing it.
You're correct on all counts. I chose to combine the Unit and Functional test runs because they're mostly unit tests with some local integration sprinkled in, but mainly because their combined run time is close to that of the Manual suite. This gives us the most bang for the buck in terms of speeding up PR runs right now. We also shouldn't be running Unit and Functional tests in any of the Azure SQL configurations; that's another future optimization.
```diff
- # Include the code coverage job if the build type is Project.
  ${{ if eq(parameters.buildType, 'Project') }}:
+ # Include the code coverage job under certain circumstances:
```
This will continue to publish code coverage the same as we did before the split... for now. This is fragile and we should re-think how/when/why to publish coverage as we re-write these pipelines.
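If coverage publishing were later gated on test type as well as build type, the condition might combine both parameters. This is a hypothetical sketch of one possible future shape, not code from this PR; the template path and parameter names are assumptions.

```yaml
# Hypothetical future gating: only include the coverage job when the build
# is a Project build AND unit tests actually ran in this pipeline.
jobs:
  - ${{ if and(eq(parameters.buildType, 'Project'), ne(parameters.testType, 'Integration')) }}:
    - template: code-coverage-job.yml   # illustrative template name
```

The appeal of a combined condition is that coverage is never published from a run that skipped the unit suites, which is the fragility the comment above calls out.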
Pull request overview
Copilot reviewed 7 out of 7 changed files in this pull request and generated 2 comments.
priyankatiwari08 left a comment:
Looks good to me. The Manual tests by themselves take quite a significant amount of time to run, so this segregation should help us in the long run.
Motivation
Our PR pipelines are taking too long to run, for many reasons that we can't fix all at once. The low-hanging fruit right now is that we are running all MDS tests twice: once in the Project pipeline and once in the Package pipeline. This is unnecessary, so we're splitting things up: Project runs will perform unit tests (the MDS Unit and Functional suites), and Package runs will perform integration tests (the MDS Manual suite).
Reasoning
Testing
PR pipeline runs will confirm that tests are still passing. I will manually verify that the expected types of tests are running in the designated pipelines.