
Commit e0123ca

committed Jan 9, 2023
Introduce ContinueOnFailure for Ordered containers
Ordered containers that are also decorated with ContinueOnFailure will not stop running specs after the first spec fails. Also - this commit fixes a separate bug where timedout specs were not correctly treated as failures when determining whether or not to run AfterAlls in an Ordered container.
1 parent 89dda20 commit e0123ca
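The commit message above describes the new behavior; for orientation, here is a minimal, self-contained sketch of how the decorator is intended to be used. The suite name, spec names, and the deliberate failure are illustrative and are not part of this commit.

```go
package continueonfailure_example_test

import (
	"testing"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

func TestContinueOnFailureExample(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "ContinueOnFailure example")
}

// The Ordered container keeps specs in order and never runs them in parallel
// with one another; ContinueOnFailure additionally keeps later specs running
// after an earlier spec fails.
var _ = Describe("a flaky pipeline", Ordered, ContinueOnFailure, func() {
	var results []string

	BeforeAll(func() {
		results = []string{} // shared (imagine: expensive) setup
	})

	It("runs step one", func() {
		results = append(results, "one")
		Expect(results).To(HaveLen(1))
	})

	It("may fail without stopping the container", func() {
		Expect(false).To(BeTrue(), "deliberate failure for illustration")
	})

	It("still runs step three", func() {
		// without ContinueOnFailure this spec would be skipped
		results = append(results, "three")
		Expect(results).To(ContainElement("three"))
	})

	AfterAll(func() {
		results = nil // cleanup still runs
	})
})
```

Without `ContinueOnFailure`, the third spec would be reported as skipped because an earlier spec in the ordered container failed.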

File tree: 11 files changed, 390 additions and 29 deletions


‎decorator_dsl.go

12 additions, 2 deletions

@@ -46,22 +46,32 @@ const Pending = internal.Pending

 /*
 Serial is a decorator that allows you to mark a spec or container as serial. These specs will never run in parallel with other specs.
-Tests in ordered containers cannot be marked as serial - mark the ordered container instead.
+Specs in ordered containers cannot be marked as serial - mark the ordered container instead.

 You can learn more here: https://onsi.github.io/ginkgo/#serial-specs
 You can learn more about decorators here: https://onsi.github.io/ginkgo/#decorator-reference
 */
 const Serial = internal.Serial

 /*
-Ordered is a decorator that allows you to mark a container as ordered. Tests in the container will always run in the order they appear.
+Ordered is a decorator that allows you to mark a container as ordered. Specs in the container will always run in the order they appear.
 They will never be randomized and they will never run in parallel with one another, though they may run in parallel with other specs.

 You can learn more here: https://onsi.github.io/ginkgo/#ordered-containers
 You can learn more about decorators here: https://onsi.github.io/ginkgo/#decorator-reference
 */
 const Ordered = internal.Ordered

+/*
+ContinueOnFailure is a decorator that allows you to mark an Ordered container to continue running specs even if failures occur. Ordinarily an ordered container will stop running specs after the first failure occurs. Note that if a BeforeAll or a BeforeEach/JustBeforeEach annotated with OncePerOrdered fails, then no specs will run, as the precondition for the Ordered container is considered to have failed.
+
+ContinueOnFailure only applies to the outermost Ordered container. Attempting to place ContinueOnFailure in a nested container will result in an error.
+
+You can learn more here: https://onsi.github.io/ginkgo/#ordered-containers
+You can learn more about decorators here: https://onsi.github.io/ginkgo/#decorator-reference
+*/
+const ContinueOnFailure = internal.ContinueOnFailure
+
 /*
 OncePerOrdered is a decorator that allows you to mark outer BeforeEach, AfterEach, JustBeforeEach, and JustAfterEach setup nodes to run once
 per ordered context. Normally these setup nodes run around each individual spec, with OncePerOrdered they will run once around the set of specs in an ordered container.

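As a quick illustration of the placement rule in the doc comment above (the container and spec names are made up for this sketch, and the dot-imports are the same as in the earlier example):

```go
// Valid: ContinueOnFailure decorates the outermost Ordered container; nested
// containers simply inherit the ordered behavior.
var _ = Describe("outer", Ordered, ContinueOnFailure, func() {
	Describe("inner", func() {
		It("runs in order", func() {})
	})
})

// Invalid: ContinueOnFailure on an Ordered container that itself sits inside
// an Ordered container is rejected with an error.
var _ = Describe("outer", Ordered, func() {
	Describe("inner", Ordered, ContinueOnFailure, func() {
		It("never runs", func() {})
	})
})
```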
‎docs/index.md

10 additions, 1 deletion

@@ -2264,7 +2264,11 @@ Lastly, the `OncePerOrdered` container cannot be applied to the `ReportBeforeEach`

 Normally, when a spec fails Ginkgo moves on to the next spec. This is possible because Ginkgo assumes, by default, that all specs are independent. However `Ordered` containers explicitly opt in to a different behavior. Spec independence cannot be guaranteed in `Ordered` containers, so Ginkgo treats failures differently.

-When a spec in an `Ordered` container fails all subsequent specs are skipped. Ginkgo will then run any `AfterAll` node closures to clean up after the specs. This failure behavior cannot be overridden.
+When a spec in an `Ordered` container fails all subsequent specs are skipped. Ginkgo will then run any `AfterAll` node closures to clean up after the specs.
+
+You can override this behavior by decorating an `Ordered` container with `ContinueOnFailure`. This is useful in cases where `Ordered` is being used to provide shared, expensive setup for a collection of specs. When `ContinueOnFailure` is set, Ginkgo will continue running specs even if an earlier spec in the `Ordered` container has failed. If, however, a `BeforeAll` or `OncePerOrdered` `BeforeEach` node has failed then Ginkgo will skip all subsequent specs, as the setup for the collection of specs is presumed to have failed.
+
+`ContinueOnFailure` can only be applied to the outermost `Ordered` container. It is an error to apply it to a nested container.

 #### Combining Serial and Ordered

@@ -4819,6 +4823,11 @@ The `Ordered` decorator applies to container nodes only. It is an error to try

 When a spec in an `Ordered` container fails, all subsequent specs in the ordered container are skipped. Only `Ordered` containers can contain `BeforeAll` and `AfterAll` setup nodes.

+#### The ContinueOnFailure Decorator
+The `ContinueOnFailure` decorator applies to outermost `Ordered` container nodes only. It is an error to try to apply the `ContinueOnFailure` decorator to anything other than an `Ordered` container - and that `Ordered` container must not have any parent `Ordered` containers.
+
+When an `Ordered` container is decorated with `ContinueOnFailure`, the failure of one spec in the container will not prevent other specs from running. This is useful in cases where `Ordered` containers are being used to share common (expensive) setup for a collection of specs but the specs themselves don't rely on one another.
+
 #### The OncePerOrdered Decorator
 The `OncePerOrdered` decorator applies to setup nodes only. It is an error to try to apply the `OncePerOrdered` decorator to a container or subject node.

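The `BeforeAll` caveat in the documentation above can be sketched as follows (container and spec names are illustrative; the imports are the same dot-imports assumed in the first sketch):

```go
// Even with ContinueOnFailure, a failure in the run-once setup skips the rest
// of the collection: the shared precondition is presumed broken.
var _ = Describe("with shared setup", Ordered, ContinueOnFailure, func() {
	BeforeAll(func() {
		Fail("shared setup failed") // illustrative failure
	})

	It("spec A", func() {}) // reported as failed: its BeforeAll failed
	It("spec B", func() {}) // skipped: "Spec skipped because a BeforeAll node failed"
})
```

The skip message in the comment is the one added to `internal/group.go` by this commit.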
‎dsl/decorators/decorators_dsl.go

1 addition

@@ -30,6 +30,7 @@ const Focus = ginkgo.Focus
 const Pending = ginkgo.Pending
 const Serial = ginkgo.Serial
 const Ordered = ginkgo.Ordered
+const ContinueOnFailure = ginkgo.ContinueOnFailure
 const OncePerOrdered = ginkgo.OncePerOrdered
 const SuppressProgressReporting = ginkgo.SuppressProgressReporting

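Since this file only re-exports the decorator, suites that use the split dsl packages pick it up via a dot-import. A minimal sketch, assuming the usual `dsl/core` import path for the core DSL (only the `dsl/decorators` package is confirmed by this diff):

```go
package decorators_example_test

import (
	. "github.com/onsi/ginkgo/v2/dsl/core"       // Describe, It, ... (assumed path)
	. "github.com/onsi/ginkgo/v2/dsl/decorators" // Ordered, ContinueOnFailure, ...
)

// The decorators dsl mirrors the top-level ginkgo package, so the new
// decorator is used exactly like Ordered or OncePerOrdered.
var _ = Describe("a container", Ordered, ContinueOnFailure, func() {
	It("keeps running after a sibling fails", func() {})
})
```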
‎internal/group.go

29 additions, 12 deletions

@@ -94,15 +94,19 @@ type group struct {
 	runOncePairs   map[uint]runOncePairs
 	runOnceTracker map[runOncePair]types.SpecState

-	succeeded bool
+	succeeded              bool
+	failedInARunOnceBefore bool
+	continueOnFailure      bool
 }

 func newGroup(suite *Suite) *group {
 	return &group{
-		suite:          suite,
-		runOncePairs:   map[uint]runOncePairs{},
-		runOnceTracker: map[runOncePair]types.SpecState{},
-		succeeded:      true,
+		suite:                  suite,
+		runOncePairs:           map[uint]runOncePairs{},
+		runOnceTracker:         map[runOncePair]types.SpecState{},
+		succeeded:              true,
+		failedInARunOnceBefore: false,
+		continueOnFailure:      false,
 	}
 }

@@ -137,10 +141,14 @@ func (g *group) evaluateSkipStatus(spec Spec) (types.SpecState, types.Failure) {
 	if !g.suite.deadline.IsZero() && g.suite.deadline.Before(time.Now()) {
 		return types.SpecStateSkipped, types.Failure{}
 	}
-	if !g.succeeded {
+	if !g.succeeded && !g.continueOnFailure {
 		return types.SpecStateSkipped, g.suite.failureForLeafNodeWithMessage(spec.FirstNodeWithType(types.NodeTypeIt),
 			"Spec skipped because an earlier spec in an ordered container failed")
 	}
+	if g.failedInARunOnceBefore && g.continueOnFailure {
+		return types.SpecStateSkipped, g.suite.failureForLeafNodeWithMessage(spec.FirstNodeWithType(types.NodeTypeIt),
+			"Spec skipped because a BeforeAll node failed")
+	}
 	beforeOncePairs := g.runOncePairs[spec.SubjectID()].withType(types.NodeTypeBeforeAll | types.NodeTypeBeforeEach | types.NodeTypeJustBeforeEach)
 	for _, pair := range beforeOncePairs {
 		if g.runOnceTracker[pair].Is(types.SpecStateSkipped) {
@@ -168,7 +176,8 @@ func (g *group) isLastSpecWithPair(specID uint, pair runOncePair) bool {
 	return lastSpecID == specID
 }

-func (g *group) attemptSpec(isFinalAttempt bool, spec Spec) {
+func (g *group) attemptSpec(isFinalAttempt bool, spec Spec) bool {
+	failedInARunOnceBefore := false
 	pairs := g.runOncePairs[spec.SubjectID()]

 	nodes := spec.Nodes.WithType(types.NodeTypeBeforeAll)
@@ -194,6 +203,7 @@ func (g *group) attemptSpec(isFinalAttempt bool, spec Spec) {
 			}
 			if g.suite.currentSpecReport.State != types.SpecStatePassed {
 				terminatingNode, terminatingPair = node, oncePair
+				failedInARunOnceBefore = !terminatingPair.isZero()
 				break
 			}
 		}
@@ -216,7 +226,7 @@ func (g *group) attemptSpec(isFinalAttempt bool, spec Spec) {
 			//this node has already been run on this attempt, don't rerun it
 			return false
 		}
-		pair := runOncePair{}
+		var pair runOncePair
 		switch node.NodeType {
 		case types.NodeTypeCleanupAfterEach, types.NodeTypeCleanupAfterAll:
 			// check if we were generated in an AfterNode that has already run
@@ -246,9 +256,13 @@ func (g *group) attemptSpec(isFinalAttempt bool, spec Spec) {
 			if !terminatingPair.isZero() && terminatingNode.NestingLevel == node.NestingLevel {
 				return true //...or, a run-once node at our nesting level was skipped which means this is our last chance to run
 			}
-		case types.SpecStateFailed, types.SpecStatePanicked: // the spec has failed...
+		case types.SpecStateFailed, types.SpecStatePanicked, types.SpecStateTimedout: // the spec has failed...
 			if isFinalAttempt {
-				return true //...if this was the last attempt then we're the last spec to run and so the AfterNode should run
+				if g.continueOnFailure {
+					return isLastSpecWithPair || failedInARunOnceBefore //...we're configured to continue on failures - so we should only run if we're the last spec for this pair or if we failed in a runOnceBefore (which means we _are_ the last spec to run)
+				} else {
+					return true //...this was the last attempt and continueOnFailure is false therefore we are the last spec to run and so the AfterNode should run
+				}
 			}
 			if !terminatingPair.isZero() { // ...and it failed in a run-once. which will be running again
 				if node.NodeType.Is(types.NodeTypeCleanupAfterEach | types.NodeTypeCleanupAfterAll) {
@@ -281,10 +295,12 @@ func (g *group) attemptSpec(isFinalAttempt bool, spec Spec) {
 		includeDeferCleanups = true
 	}

+	return failedInARunOnceBefore
 }

 func (g *group) run(specs Specs) {
 	g.specs = specs
+	g.continueOnFailure = specs[0].Nodes.FirstNodeMarkedOrdered().MarkedContinueOnFailure
 	for _, spec := range g.specs {
 		g.runOncePairs[spec.SubjectID()] = runOncePairsForSpec(spec)
 	}
@@ -301,8 +317,8 @@ func (g *group) run(specs Specs) {
 		skip := g.suite.config.DryRun || g.suite.currentSpecReport.State.Is(types.SpecStateFailureStates|types.SpecStateSkipped|types.SpecStatePending)

 		g.suite.currentSpecReport.StartTime = time.Now()
+		failedInARunOnceBefore := false
 		if !skip {
-
 			var maxAttempts = 1

 			if g.suite.currentSpecReport.MaxMustPassRepeatedly > 0 {
@@ -327,7 +343,7 @@ func (g *group) run(specs Specs) {
 				}
 			}

-			g.attemptSpec(attempt == maxAttempts-1, spec)
+			failedInARunOnceBefore = g.attemptSpec(attempt == maxAttempts-1, spec)

 			g.suite.currentSpecReport.EndTime = time.Now()
 			g.suite.currentSpecReport.RunTime = g.suite.currentSpecReport.EndTime.Sub(g.suite.currentSpecReport.StartTime)
@@ -355,6 +371,7 @@ func (g *group) run(specs Specs) {
 		g.suite.processCurrentSpecReport()
 		if g.suite.currentSpecReport.State.Is(types.SpecStateFailureStates) {
 			g.succeeded = false
+			g.failedInARunOnceBefore = g.failedInARunOnceBefore || failedInARunOnceBefore
 		}
 		g.suite.selectiveLock.Lock()
 		g.suite.currentSpecReport = types.SpecReport{}

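To summarize the control-flow change above: with `ContinueOnFailure`, an ordinary earlier failure no longer skips later specs, but a failure recorded in a run-once before node (such as `BeforeAll`) still does. Below is a condensed, standalone sketch of just that decision; it is not the real `evaluateSkipStatus`, which also handles suite deadlines, run-once pairs, and retries.

```go
package main

import "fmt"

// shouldSkip mirrors the two new checks added to evaluateSkipStatus in this
// commit, with the group's fields passed in as plain booleans for clarity.
func shouldSkip(groupSucceeded, continueOnFailure, failedInARunOnceBefore bool) (bool, string) {
	if !groupSucceeded && !continueOnFailure {
		return true, "Spec skipped because an earlier spec in an ordered container failed"
	}
	if failedInARunOnceBefore && continueOnFailure {
		return true, "Spec skipped because a BeforeAll node failed"
	}
	return false, ""
}

func main() {
	// An earlier spec failed, but ContinueOnFailure is set: the spec still runs.
	fmt.Println(shouldSkip(false, true, false))
	// A BeforeAll failed: the rest of the collection is skipped even with ContinueOnFailure.
	fmt.Println(shouldSkip(false, true, true))
}
```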