
Conversation


@nicoddemus nicoddemus commented Sep 20, 2025

This PR copies the files from pytest-subtests and performs minimal integration.
I'm opening this to gauge whether everyone is on board with integrating this feature into the core.

Why?

Pros

  • subtests is a standard unittest feature, so it makes sense for pytest to support it as well.
  • Provides a simple alternative to parametrization.
  • Adds the ability to generate new test cases at runtime during the test execution, which is not possible with parametrization.
  • While it can exist as an external plugin, it requires many hacks, and better report integration is not easily achievable without core integration (off the top of my head: issues with terminal reporting, last failed and stepwise support, among others).

Cons

  • Adds another maintenance burden to the core.

TODO ✅

If everyone is on board, I will take the time this week to polish it and get it ready to merge ASAP:

  • Clean up the implementation: Currently it relies on monkey-patching, which should no longer be needed since we can modify the core code directly.
  • Documentation: The feature should be documented as experimental -- meaning that while the feature itself is solid, we are still working out details such as how to best report subtest failures, integration with other plugins, etc.
  • Fix lint and testing failures (obviously).

Related

@nicoddemus nicoddemus changed the title [DRAFT] Integrate pytest subtests [DRAFT] Integrate pytest-subtests Sep 20, 2025
@Pierre-Sassoulas (Member)

I agree that this feature should be in pytest core because it's a feature in unittest, and pytest should aim to be a drop-in replacement for unittest (plus the original issue has 40 👍 and no 👎 at the time of writing: overwhelming popular support in my book).

@RonnyPfannschmidt (Member)

I recall we need to fix marking the owning test case as failed when one of its subtests fails.

@RonnyPfannschmidt (Member)

But yeah, I want to see this in.

@webknjaz (Member)

Sounds reasonable

@nicoddemus nicoddemus force-pushed the integrate-pytest-subtests branch from b43ab38 to 97ee032 on September 22, 2025 23:07
@nicoddemus nicoddemus force-pushed the integrate-pytest-subtests branch from 97ee032 to 6b5831f on September 26, 2025 13:34
@psf-chronographer psf-chronographer bot added the bot:chronographer:provided (automation) changelog entry is part of PR label Sep 26, 2025
@nicoddemus nicoddemus force-pushed the integrate-pytest-subtests branch 2 times, most recently from 5f56d81 to c93c0e0 on September 26, 2025 14:00
@nicoddemus nicoddemus marked this pull request as ready for review September 26, 2025 14:00
@nicoddemus nicoddemus changed the title [DRAFT] Integrate pytest-subtests Integrate pytest-subtests Sep 26, 2025
@nicoddemus (Member, Author)

Ready for an initial review, folks.

@nicoddemus nicoddemus force-pushed the integrate-pytest-subtests branch from c93c0e0 to b569c93 on September 29, 2025 22:17
@bluetech (Member) left a comment


Nice to see this happening.

I ran out of time for the review today, so I didn't really get to the implementation parts, but I already have some comments, so I'm submitting a partial review.

from _pytest.runner import check_interactive_exception


if TYPE_CHECKING:

No need for TYPE_CHECKING for these imports

)


@dataclasses.dataclass

Can this class be immutable? I.e. frozen=True, and dict -> Mapping.

Can also consider slots=True, and kw_only=True (which is nice for backward compat).


# Note: cannot use a dataclass here because Sphinx insists on showing the __init__ method in the documentation,
# even if we explicitly use :exclude-members: __init__.
class SubTests:

Consider spelling this Subtests. This way it matches the fixture name subtests (rather than sub_tests). Generally the sub- prefix is not followed by a capital letter AFAIK. I understand it's inconsistent with unittest spelling subTest but that's on them :)

Note I don't feel strongly about this, if you prefer SubTests that's fine.


# Note: cannot use a dataclass here because Sphinx insists on showing the __init__ method in the documentation,
# even if we explicitly use :exclude-members: __init__.
class SubTests:

Another naming point, for some fixture types we use the Fixture suffix, e.g. CaptureFixture. But for some we don't (e.g. MonkeyPatch, Cache, Pytester). Perhaps the difference is whether the class only makes sense in a fixture context (like CaptureFixture) or if it's usable independently. For SubTests I believe it's strictly a fixture, so that would suggest SubTestsFixture. But that's a bit longer.

I wish we were consistent on this. I'll leave the decision to your good judgment.

class SubTests:
"""Subtests fixture, enables declaring subtests inside test functions via the :meth:`test` method."""

def __init__(

Add _ispytest privacy check?
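The `_ispytest` convention mentioned here, sketched in self-contained form (in pytest itself the check is a shared internal helper rather than a local function):

```python
def check_ispytest(ispytest: bool) -> None:
    # pytest's convention: internal constructors take a keyword-only
    # `_ispytest` flag so users don't instantiate private classes
    # directly; the error message below is illustrative.
    if not ispytest:
        raise TypeError("A private pytest class or function was used.")


class SubTests:
    def __init__(self, *, _ispytest: bool = False) -> None:
        check_ispytest(_ispytest)
```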


pytest allows grouping assertions within a normal test into *subtests*.

Subtests are an alternative to parametrization, particularly useful when test setup is expensive or when the exact parametrization values are not known at collection time.

For expensive test setup, pytest has a solution in the form of scoped fixtures. Since regular parametrization is advantageous, e.g. allowing each param value to be tested individually by nodeid, I think we should not encourage subtests when there's a decent parametrization alternative.

The "values not known at collection time" case is what I think we should emphasize. There is indeed inherently no way to do "dynamic parametrization" after collection.
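Illustrating the distinction: when values are known at collection time, plain parametrization gives each value its own node ID (e.g. `test_square[1]`), so each case can be selected and rerun individually. This assumes pytest is installed:

```python
import pytest


# Collection-time values: each value becomes its own test node,
# addressable by node ID.
@pytest.mark.parametrize("n", [1, 2, 3])
def test_square(n):
    assert n * n >= 0
```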

)


class TestSubTest:

Suggested change:
- class TestSubTest:
+ class TestUnittestSubTest:



class TestSubTest:
"""Test.subTest functionality."""

Suggested change:
- """Test.subTest functionality."""
+ """Test subTest() functionality in unittest tests."""

Unless I misunderstood.



@pytest.mark.parametrize("mode", ["normal", "xdist"])
class TestFixture:

Would be nice to have a test for the combination of parametrization + subtests. I'm sure it works, but it's the sort of thing that I can see might break if a different implementation strategy was used.

---------------------------

While :ref:`traditional pytest parametrization <parametrize>` and ``subtests`` are similar, they have important differences and use cases.


Continuing from the comment above, I can see two ways we can approach this:

1 - Subtests are for "runtime parametrization"

Per comment above, subtests are useful when you have some data dynamically fetched in the test and want individualized reporting for each data value.

The idea here is that parametrization should be the go-to tool, but we offer this subtest tool for this particular scenario.

2 - Subtests are for "sub-testing"

By which I mean, subtests are for when you have one conceptual test, i.e. consider it a complete whole, but just want to break down its reporting to parts.


How do you see it? The reason I'm asking is that it can affect how we document the feature, what we recommend, etc.
