
Feature request: Collect failed tests when uTests are run in parallel #5937

Description

@bwbecker

When I run my test suite with testParallelism defaulting to true, the summary with two failing tests looks like this:

[205-08] Tests: 48, Passed: 48, Failed: 0
[205-08] 
[205/205, 1 failed] ============================== _solver.jvm.test ============================== 4s
1 tasks failed
_solver.jvm.test.testForked 2 tests failed: 
  oat.degreeAudit.parser.CourseDefTests oat.degreeAudit.parser.CourseDefTests.duplicate name
  oat.degreeAudit.parser.CourseSpecParserTests oat.degreeAudit.parser.CourseSpecParserTests.combining./\.b

To see the output of the failing tests, I have to scroll back through hundreds of lines of output.

When testParallelism is set to false, I get a much more convenient summary at the end, but at a cost in execution time:

[205] ----------------------------------- Failures -----------------------------------
[205] X oat.degreeAudit.parser.CourseDefTests.duplicate name 1ms 
[205]   oatlibxp.verify.VerificationError: ToDo: Check for duplicate course defs.
[205]     oat.degreeAudit.parser.CourseDefTests$.fail(CourseDefTests.scala:9)
[205]     oat.degreeAudit.parser.CourseDefTests$.$init$$$anonfun$1$$anonfun$2(CourseDefTests.scala:30)
[205] X oat.degreeAudit.parser.CourseSpecParserTests.combining./\.b 0ms 
[205]   java.lang.AssertionError: assertion failed: ==> assertion failed: 1 != 2
[205]     oat.degreeAudit.parser.CourseSpecParserTests$.$init$$$anonfun$1$$anonfun$11$$anonfun$4$$anonfun$2(CourseSpecParserTests.scala:362)
[205] Tests: 199, Passed: 197, Failed: 2
[205] 
[205/205, 1 failed] ============================== _solver.jvm.test ============================== 3s
1 tasks failed
_solver.jvm.test.testForked 2 tests failed: 
  oat.degreeAudit.parser.CourseDefTests oat.degreeAudit.parser.CourseDefTests.duplicate name
  oat.degreeAudit.parser.CourseSpecParserTests oat.degreeAudit.parser.CourseSpecParserTests.combining./\.b
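
For reference, the relevant knob lives on the test module in build.mill. A minimal sketch of what I'm toggling (the module name and Scala version are illustrative, and the exact override syntax may vary by Mill version):

    package build
    import mill._, scalalib._

    object solver extends ScalaModule {
      def scalaVersion = "3.3.4" // illustrative

      object test extends ScalaTests with TestModule.Utest {
        // Disabling parallel test grouping yields the consolidated
        // "Failures" summary shown above, at the cost of wall-clock time.
        def testParallelism = false
      }
    }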

It seems that, even when tests are run in parallel, it should be possible to collect the final results from each JVM and present them together at the end, so that I don't have to scroll up goodness knows how far to find the details of the failed tests.
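
Something along these lines is what I have in mind (only a sketch with made-up names, not Mill's real API): each forked JVM already knows its own failures when it finishes, so the parent process could gather those per-group summaries and print them once at the very end, the way the sequential run does.

    // Hypothetical sketch only: names and types are invented for illustration.
    import java.util.concurrent.ConcurrentLinkedQueue
    import scala.jdk.CollectionConverters._

    final case class FailedTest(suite: String, test: String, message: String)

    object FailureCollector {
      private val failures = new ConcurrentLinkedQueue[FailedTest]()

      // Each parallel test group reports its failures here as it finishes.
      def record(results: Seq[FailedTest]): Unit = results.foreach(failures.add)

      // After all groups have completed, print one consolidated section,
      // like the testParallelism = false run produces today.
      def printSummary(): Unit = {
        val all = failures.asScala.toVector
        if (all.nonEmpty) {
          println("----------------------------------- Failures -----------------------------------")
          all.foreach(f => println(s"X ${f.suite}.${f.test}\n  ${f.message}"))
        }
        println(s"Failed: ${all.size}")
      }
    }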
