Introduce ContinueOnFailure for Ordered containers
Ordered containers that are also decorated with ContinueOnFailure will not stop running specs after the first spec fails.

Also, this commit fixes a separate bug where timed-out specs were not correctly treated as failures when determining whether or not to run AfterAlls in an Ordered container.
onsi committed Jan 9, 2023
1 parent 89dda20 commit e0123ca
Showing 11 changed files with 390 additions and 29 deletions.
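Before the file-by-file diff, here is a minimal sketch of the new decorator from a user's point of view; the suite, container, and spec names below are illustrative and not taken from this commit.

```go
package fixture_test

import (
	"testing"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

func TestFixture(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "Fixture Suite")
}

var _ = Describe("a shared, expensive fixture", Ordered, ContinueOnFailure, func() {
	BeforeAll(func() {
		// expensive shared setup; if this fails, every spec below is skipped
	})

	It("checks the first property", func() {
		Expect(1 + 1).To(Equal(2))
	})

	It("runs even when an earlier spec failed", func() {
		// with ContinueOnFailure, a failed It above no longer skips this spec
	})

	AfterAll(func() {
		// shared teardown still runs after the last spec
	})
})
```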
14 changes: 12 additions & 2 deletions decorator_dsl.go
@@ -46,22 +46,32 @@ const Pending = internal.Pending

/*
Serial is a decorator that allows you to mark a spec or container as serial. These specs will never run in parallel with other specs.
Tests in ordered containers cannot be marked as serial - mark the ordered container instead.
Specs in ordered containers cannot be marked as serial - mark the ordered container instead.
You can learn more here: https://onsi.github.io/ginkgo/#serial-specs
You can learn more about decorators here: https://onsi.github.io/ginkgo/#decorator-reference
*/
const Serial = internal.Serial

/*
Ordered is a decorator that allows you to mark a container as ordered. Tests in the container will always run in the order they appear.
Ordered is a decorator that allows you to mark a container as ordered. Specs in the container will always run in the order they appear.
They will never be randomized and they will never run in parallel with one another, though they may run in parallel with other specs.
You can learn more here: https://onsi.github.io/ginkgo/#ordered-containers
You can learn more about decorators here: https://onsi.github.io/ginkgo/#decorator-reference
*/
const Ordered = internal.Ordered

/*
ContinueOnFailure is a decorator that allows you to mark an Ordered container to continue running specs even if failures occur. Ordinarily an Ordered container will stop running specs after the first failure occurs. Note that if a BeforeAll or a BeforeEach/JustBeforeEach annotated with OncePerOrdered fails, then no specs will run, as the precondition for the Ordered container will be considered to have failed.
ContinueOnFailure only applies to the outermost Ordered container. Attempting to place ContinueOnFailure in a nested container will result in an error.
You can learn more here: https://onsi.github.io/ginkgo/#ordered-containers
You can learn more about decorators here: https://onsi.github.io/ginkgo/#decorator-reference
*/
const ContinueOnFailure = internal.ContinueOnFailure

/*
OncePerOrdered is a decorator that allows you to mark outer BeforeEach, AfterEach, JustBeforeEach, and JustAfterEach setup nodes to run once
per ordered context. Normally these setup nodes run around each individual spec, with OncePerOrdered they will run once around the set of specs in an ordered container.
11 changes: 10 additions & 1 deletion docs/index.md
@@ -2264,7 +2264,11 @@ Lastly, the `OncePerOrdered` container cannot be applied to the `ReportBeforeEac

Normally, when a spec fails Ginkgo moves on to the next spec. This is possible because Ginkgo assumes, by default, that all specs are independent. However `Ordered` containers explicitly opt in to a different behavior. Spec independence cannot be guaranteed in `Ordered` containers, so Ginkgo treats failures differently.

When a spec in an `Ordered` container fails all subsequent specs are skipped. Ginkgo will then run any `AfterAll` node closures to clean up after the specs. This failure behavior cannot be overridden.
When a spec in an `Ordered` container fails all subsequent specs are skipped. Ginkgo will then run any `AfterAll` node closures to clean up after the specs.

You can override this behavior by decorating an `Ordered` container with `ContinueOnFailure`. This is useful in cases where `Ordered` is being used to provide shared, expensive setup for a collection of specs. When `ContinueOnFailure` is set, Ginkgo will continue running specs even if an earlier spec in the `Ordered` container has failed. If, however, a `BeforeAll` or `OncePerOrdered` `BeforeEach` node has failed, then Ginkgo will skip all subsequent specs, as the setup for the collection of specs is presumed to have failed.

`ContinueOnFailure` can only be applied to the outermost `Ordered` container. It is an error to apply it to a nested container.
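For illustration, a fragment to drop into an existing suite (names invented for this sketch) showing the setup-failure case described above: even with `ContinueOnFailure`, a failing `BeforeAll` skips the rest of the container because the shared setup is presumed broken.

```go
var _ = Describe("a collection that shares one setup", Ordered, ContinueOnFailure, func() {
	BeforeAll(func() {
		Fail("could not provision the shared resource")
	})

	It("A", func() {}) // reported as failed in the BeforeAll
	It("B", func() {}) // skipped: "Spec skipped because a BeforeAll node failed"
	It("C", func() {}) // skipped for the same reason
})
```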

#### Combining Serial and Ordered

@@ -4819,6 +4823,11 @@ The `Ordered` decorator applies to container nodes only. It is an error to try
When a spec in an `Ordered` container fails, all subsequent specs in the ordered container are skipped. Only `Ordered` containers can contain `BeforeAll` and `AfterAll` setup nodes.
#### The ContinueOnFailure Decorator
The `ContinueOnFailure` decorator applies to outermost `Ordered` container nodes only. It is an error to try to apply the `ContinueOnFailure` decorator to anything other than an `Ordered` container - and that `Ordered` container must not have any parent `Ordered` containers.
When an `Ordered` container is decorated with `ContinueOnFailure`, the failure of one spec in the container will not prevent other specs from running. This is useful in cases where `Ordered` containers are used to share common (expensive) setup for a collection of specs, but the specs themselves don't rely on one another.
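As a sketch of the constraint above (container names invented for this example), placing the decorator on a nested `Ordered` container is rejected when the spec tree is built:

```go
var _ = Describe("outer", Ordered, func() {
	Describe("inner", Ordered, ContinueOnFailure, func() {
		// tree construction fails here with "ContinueOnFailure not decorating
		// an outermost Ordered Container"
		It("spec", func() {})
	})
})
```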
#### The OncePerOrdered Decorator
The `OncePerOrdered` decorator applies to setup nodes only. It is an error to try to apply the `OncePerOrdered` decorator to a container or subject node.
1 change: 1 addition & 0 deletions dsl/decorators/decorators_dsl.go
@@ -30,6 +30,7 @@ const Focus = ginkgo.Focus
const Pending = ginkgo.Pending
const Serial = ginkgo.Serial
const Ordered = ginkgo.Ordered
const ContinueOnFailure = ginkgo.ContinueOnFailure
const OncePerOrdered = ginkgo.OncePerOrdered
const SuppressProgressReporting = ginkgo.SuppressProgressReporting
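As the hunk above shows, the decorator is also re-exported from the `dsl/decorators` package. A minimal sketch (suite name invented, test bootstrap omitted) for codebases that keep the core DSL qualified but dot-import the decorators:

```go
package shareddb_test

import (
	"github.com/onsi/ginkgo/v2"
	. "github.com/onsi/ginkgo/v2/dsl/decorators"
)

// ginkgo.ContinueOnFailure and the re-exported ContinueOnFailure are the same
// value, so either spelling can decorate the Ordered container.
var _ = ginkgo.Describe("shared database", Ordered, ContinueOnFailure, func() {
	ginkgo.It("writes a row", func() {})
	ginkgo.It("reads it back", func() {})
})
```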

41 changes: 29 additions & 12 deletions internal/group.go
@@ -94,15 +94,19 @@ type group struct {
runOncePairs map[uint]runOncePairs
runOnceTracker map[runOncePair]types.SpecState

succeeded bool
succeeded bool
failedInARunOnceBefore bool
continueOnFailure bool
}

func newGroup(suite *Suite) *group {
return &group{
suite: suite,
runOncePairs: map[uint]runOncePairs{},
runOnceTracker: map[runOncePair]types.SpecState{},
succeeded: true,
suite: suite,
runOncePairs: map[uint]runOncePairs{},
runOnceTracker: map[runOncePair]types.SpecState{},
succeeded: true,
failedInARunOnceBefore: false,
continueOnFailure: false,
}
}

@@ -137,10 +141,14 @@ func (g *group) evaluateSkipStatus(spec Spec) (types.SpecState, types.Failure) {
if !g.suite.deadline.IsZero() && g.suite.deadline.Before(time.Now()) {
return types.SpecStateSkipped, types.Failure{}
}
if !g.succeeded {
if !g.succeeded && !g.continueOnFailure {
return types.SpecStateSkipped, g.suite.failureForLeafNodeWithMessage(spec.FirstNodeWithType(types.NodeTypeIt),
"Spec skipped because an earlier spec in an ordered container failed")
}
if g.failedInARunOnceBefore && g.continueOnFailure {
return types.SpecStateSkipped, g.suite.failureForLeafNodeWithMessage(spec.FirstNodeWithType(types.NodeTypeIt),
"Spec skipped because a BeforeAll node failed")
}
beforeOncePairs := g.runOncePairs[spec.SubjectID()].withType(types.NodeTypeBeforeAll | types.NodeTypeBeforeEach | types.NodeTypeJustBeforeEach)
for _, pair := range beforeOncePairs {
if g.runOnceTracker[pair].Is(types.SpecStateSkipped) {
@@ -168,7 +176,8 @@ func (g *group) isLastSpecWithPair(specID uint, pair runOncePair) bool {
return lastSpecID == specID
}

func (g *group) attemptSpec(isFinalAttempt bool, spec Spec) {
func (g *group) attemptSpec(isFinalAttempt bool, spec Spec) bool {
failedInARunOnceBefore := false
pairs := g.runOncePairs[spec.SubjectID()]

nodes := spec.Nodes.WithType(types.NodeTypeBeforeAll)
@@ -194,6 +203,7 @@ func (g *group) attemptSpec(isFinalAttempt bool, spec Spec) {
}
if g.suite.currentSpecReport.State != types.SpecStatePassed {
terminatingNode, terminatingPair = node, oncePair
failedInARunOnceBefore = !terminatingPair.isZero()
break
}
}
@@ -216,7 +226,7 @@ func (g *group) attemptSpec(isFinalAttempt bool, spec Spec) {
//this node has already been run on this attempt, don't rerun it
return false
}
pair := runOncePair{}
var pair runOncePair
switch node.NodeType {
case types.NodeTypeCleanupAfterEach, types.NodeTypeCleanupAfterAll:
// check if we were generated in an AfterNode that has already run
@@ -246,9 +256,13 @@ func (g *group) attemptSpec(isFinalAttempt bool, spec Spec) {
if !terminatingPair.isZero() && terminatingNode.NestingLevel == node.NestingLevel {
return true //...or, a run-once node at our nesting level was skipped which means this is our last chance to run
}
case types.SpecStateFailed, types.SpecStatePanicked: // the spec has failed...
case types.SpecStateFailed, types.SpecStatePanicked, types.SpecStateTimedout: // the spec has failed...
if isFinalAttempt {
return true //...if this was the last attempt then we're the last spec to run and so the AfterNode should run
if g.continueOnFailure {
return isLastSpecWithPair || failedInARunOnceBefore //...we're configured to continue on failures - so we should only run if we're the last spec for this pair or if we failed in a runOnceBefore (which means we _are_ the last spec to run)
} else {
return true //...this was the last attempt and continueOnFailure is false therefore we are the last spec to run and so the AfterNode should run
}
}
if !terminatingPair.isZero() { // ...and it failed in a run-once. which will be running again
if node.NodeType.Is(types.NodeTypeCleanupAfterEach | types.NodeTypeCleanupAfterAll) {
@@ -281,10 +295,12 @@ func (g *group) attemptSpec(isFinalAttempt bool, spec Spec) {
includeDeferCleanups = true
}

return failedInARunOnceBefore
}

func (g *group) run(specs Specs) {
g.specs = specs
g.continueOnFailure = specs[0].Nodes.FirstNodeMarkedOrdered().MarkedContinueOnFailure
for _, spec := range g.specs {
g.runOncePairs[spec.SubjectID()] = runOncePairsForSpec(spec)
}
@@ -301,8 +317,8 @@ func (g *group) run(specs Specs) {
skip := g.suite.config.DryRun || g.suite.currentSpecReport.State.Is(types.SpecStateFailureStates|types.SpecStateSkipped|types.SpecStatePending)

g.suite.currentSpecReport.StartTime = time.Now()
failedInARunOnceBefore := false
if !skip {

var maxAttempts = 1

if g.suite.currentSpecReport.MaxMustPassRepeatedly > 0 {
@@ -327,7 +343,7 @@ func (g *group) run(specs Specs) {
}
}

g.attemptSpec(attempt == maxAttempts-1, spec)
failedInARunOnceBefore = g.attemptSpec(attempt == maxAttempts-1, spec)

g.suite.currentSpecReport.EndTime = time.Now()
g.suite.currentSpecReport.RunTime = g.suite.currentSpecReport.EndTime.Sub(g.suite.currentSpecReport.StartTime)
@@ -355,6 +371,7 @@ func (g *group) run(specs Specs) {
g.suite.processCurrentSpecReport()
if g.suite.currentSpecReport.State.Is(types.SpecStateFailureStates) {
g.succeeded = false
g.failedInARunOnceBefore = g.failedInARunOnceBefore || failedInARunOnceBefore
}
g.suite.selectiveLock.Lock()
g.suite.currentSpecReport = types.SpecReport{}
233 changes: 232 additions & 1 deletion internal/internal_integration/ordered_test.go
@@ -12,6 +12,7 @@ import (
)

const SKIP_DUE_TO_EARLIER_FAILURE = "Spec skipped because an earlier spec in an ordered container failed"
const SKIP_DUE_TO_FAILURE_IN_BEFORE_ALL = "Spec skipped because a BeforeAll node failed"
const SKIP_DUE_TO_BEFORE_ALL_SKIP = "Spec skipped because Skip() was called in BeforeAll"
const SKIP_DUE_TO_BEFORE_EACH_SKIP = "Spec skipped because Skip() was called in BeforeEach"

@@ -317,6 +318,38 @@ var _ = DescribeTable("Ordered Containers",
}, "A", HavePassed(),
"B", HavePanicked(types.FailureNodeInContainer, FailureNodeType(types.NodeTypeAfterAll), "boom"),
),
Entry("when a timeout occurs in a BeforeAll", false, func() {
BeforeEach(rt.T("BE-outer"))
Context("container", Ordered, func() {
BeforeEach(rt.T("BE"))
BeforeAll(rt.TSC("BA", func(ctx SpecContext) { <-ctx.Done() }), NodeTimeout(time.Millisecond*10))
It("A", rt.T("A"))
It("B", rt.T("B"))
AfterAll(rt.T("AA"))
AfterEach(rt.T("AE"))
})
AfterEach(rt.T("AE-outer"))
}, []string{"BE-outer", "BA", "AE", "AA", "AE-outer"},
"A", HaveTimedOut(types.FailureNodeInContainer, FailureNodeType(types.NodeTypeBeforeAll)),
"B", HaveBeenSkippedWithMessage(SKIP_DUE_TO_EARLIER_FAILURE),
),
Entry("when a timeout occurs in an AfterAll", false, func() {
BeforeEach(rt.T("BE-outer"))
Context("container", Ordered, func() {
BeforeEach(rt.T("BE"))
BeforeAll(rt.T("BA"))
It("A", rt.T("A"))
It("B", rt.T("B"))
AfterAll(rt.TSC("AA", func(ctx SpecContext) { <-ctx.Done() }), NodeTimeout(time.Millisecond*10))
AfterEach(rt.T("AE"))
})
AfterEach(rt.T("AE-outer"))
}, []string{
"BE-outer", "BA", "BE", "A", "AE", "AE-outer",
"BE-outer", "BE", "B", "AE", "AA", "AE-outer",
}, "A", HavePassed(),
"B", HaveTimedOut(types.FailureNodeInContainer, FailureNodeType(types.NodeTypeAfterAll)),
),
Entry("when a failure occurs in an AfterEach, it runs the AfterAll", false, func() {
BeforeEach(rt.T("BE-outer"))
Context("container", Ordered, func() {
@@ -938,7 +971,7 @@ var _ = DescribeTable("Ordered Containers",

It("A", rt.T("A"))
It("B", rt.T("B"))
Context("nested", func() {
Context("nested", Ordered, func() {
BeforeEach(rt.T("BE-I"))
AfterEach(rt.T("AE-I"))
It("C", rt.T("C"))
@@ -981,6 +1014,204 @@ var _ = DescribeTable("Ordered Containers",
},
"A", "B", "D", HavePassed(NumAttempts(3)), "C", HavePassed(NumAttempts(1)),
),
//Oh look, we made some new dragons
Entry("ContinueOnFailure - happy path when nothing fails", true, func() {
BeforeEach(rt.T("BE", DC("DC-BE")), OncePerOrdered)
AfterEach(rt.T("AE", DC("DC-AE")), OncePerOrdered)
Context("container", Ordered, ContinueOnFailure, func() {
BeforeAll(rt.T("BA", DC("DC-BA")))
AfterAll(rt.T("AA", DC("DC-AA")))
It("A", rt.T("A"))
It("B", rt.T("B"))
It("C", rt.T("C"))
})
}, []string{
"BE", "BA", "A", "B", "C", "AA", "AE", "DC-AE", "DC-BE", "DC-AA", "DC-BA",
},
"A", "B", "C", HavePassed(),
),
Entry("ContinueOnFailure - when the first It fails", false, func() {
BeforeEach(rt.T("BE", DC("DC-BE")), OncePerOrdered)
AfterEach(rt.T("AE", DC("DC-AE")), OncePerOrdered)
Context("container", Ordered, ContinueOnFailure, func() {
BeforeAll(rt.T("BA", DC("DC-BA")))
AfterAll(rt.T("AA", DC("DC-AA")))
It("A", rt.T("A", func() { F("fail") }))
It("B", rt.T("B"))
It("C", rt.T("C"))
})
}, []string{
"BE", "BA", "A", "B", "C", "AA", "AE", "DC-AE", "DC-BE", "DC-AA", "DC-BA",
},
"A", HaveFailed(types.FailureNodeIsLeafNode, FailureNodeType(types.NodeTypeIt), "fail"), "B", "C", HavePassed(),
),
Entry("ContinueOnFailure - when a middle It fails", false, func() {
BeforeEach(rt.T("BE", DC("DC-BE")), OncePerOrdered)
AfterEach(rt.T("AE", DC("DC-AE")), OncePerOrdered)
Context("container", Ordered, ContinueOnFailure, func() {
BeforeAll(rt.T("BA", DC("DC-BA")))
AfterAll(rt.T("AA", DC("DC-AA")))
It("A", rt.T("A"))
It("B", rt.T("B", func() { F("fail") }))
It("C", rt.T("C"))
})
}, []string{
"BE", "BA", "A", "B", "C", "AA", "AE", "DC-AE", "DC-BE", "DC-AA", "DC-BA",
},
"A", HavePassed(), "B", HaveFailed(types.FailureNodeIsLeafNode, FailureNodeType(types.NodeTypeIt), "fail"), "C", HavePassed(),
),
Entry("ContinueOnFailure - when the last It fails", false, func() {
BeforeEach(rt.T("BE", DC("DC-BE")), OncePerOrdered)
AfterEach(rt.T("AE", DC("DC-AE")), OncePerOrdered)
Context("container", Ordered, ContinueOnFailure, func() {
BeforeAll(rt.T("BA", DC("DC-BA")))
AfterAll(rt.T("AA", DC("DC-AA")))
It("A", rt.T("A"))
It("B", rt.T("B"))
It("C", rt.T("C", func() { F("fail") }))
})
}, []string{
"BE", "BA", "A", "B", "C", "AA", "AE", "DC-AE", "DC-BE", "DC-AA", "DC-BA",
},
"A", "B", HavePassed(), "C", HaveFailed(types.FailureNodeIsLeafNode, FailureNodeType(types.NodeTypeIt), "fail"),
),
Entry("ContinueOnFailure - when the first It fails and we have flake attempts", false, func() {
BeforeEach(rt.T("BE", DC("DC-BE")), OncePerOrdered)
AfterEach(rt.T("AE", DC("DC-AE")), OncePerOrdered)
Context("container", Ordered, ContinueOnFailure, FlakeAttempts(2), func() {
BeforeAll(rt.T("BA", DC("DC-BA")))
AfterAll(rt.T("AA", DC("DC-AA")))
It("A", rt.T("A", func() { F("fail") }))
It("B", rt.T("B"))
It("C", rt.T("C"))
})
}, []string{
"BE", "BA", "A", "A", "B", "C", "AA", "AE", "DC-AE", "DC-BE", "DC-AA", "DC-BA",
},
"A", HaveFailed(types.FailureNodeIsLeafNode, FailureNodeType(types.NodeTypeIt), "fail", NumAttempts(2)), "B", "C", HavePassed(),
),
Entry("ContinueOnFailure - when a middle It fails and we have flake attempts", false, func() {
BeforeEach(rt.T("BE", DC("DC-BE")), OncePerOrdered)
AfterEach(rt.T("AE", DC("DC-AE")), OncePerOrdered)
Context("container", Ordered, ContinueOnFailure, FlakeAttempts(2), func() {
BeforeAll(rt.T("BA", DC("DC-BA")))
AfterAll(rt.T("AA", DC("DC-AA")))
It("A", rt.T("A"))
It("B", rt.T("B", func() { F("fail") }))
It("C", rt.T("C"))
})
}, []string{
"BE", "BA", "A", "B", "B", "C", "AA", "AE", "DC-AE", "DC-BE", "DC-AA", "DC-BA",
},
"A", HavePassed(), "B", HaveFailed(types.FailureNodeIsLeafNode, FailureNodeType(types.NodeTypeIt), "fail", NumAttempts(2)), "C", HavePassed(),
),
Entry("ContinueOnFailure - when the last It fails and we have flake attempts", false, func() {
BeforeEach(rt.T("BE", DC("DC-BE")), OncePerOrdered)
AfterEach(rt.T("AE", DC("DC-AE")), OncePerOrdered)
Context("container", Ordered, ContinueOnFailure, FlakeAttempts(2), func() {
BeforeAll(rt.T("BA", DC("DC-BA")))
AfterAll(rt.T("AA", DC("DC-AA")))
It("A", rt.T("A"))
It("B", rt.T("B"))
It("C", rt.T("C", func() { F("fail") }))
})
}, []string{
"BE", "BA", "A", "B", "C", "C", "AA", "AE", "DC-AE", "DC-BE", "DC-AA", "DC-BA",
},
"A", "B", HavePassed(), "C", HaveFailed(types.FailureNodeIsLeafNode, FailureNodeType(types.NodeTypeIt), "fail", NumAttempts(2)),
),
Entry("ContinueOnFailure - when a repeating setup node fails", false, func() {
BeforeEach(rt.T("BE", DC("DC-BE")), OncePerOrdered)
AfterEach(rt.T("AE", DC("DC-AE")), OncePerOrdered)
Context("container", Ordered, ContinueOnFailure, func() {
BeforeAll(rt.T("BA", DC("DC-BA")))
AfterAll(rt.T("AA", DC("DC-AA")))
BeforeEach(rt.T("BE-inner", func() { F("fail") }))
It("A", rt.T("A"))
It("B", rt.T("B"))
It("C", rt.T("C"))
})
}, []string{
"BE", "BA", "BE-inner", "BE-inner", "BE-inner", "AA", "AE", "DC-AE", "DC-BE", "DC-AA", "DC-BA",
},
"A", "B", "C", HaveFailed(types.FailureNodeInContainer, FailureNodeType(types.NodeTypeBeforeEach), "fail"),
),

Entry("ContinueOnFailure - when a BeforeAllFails", false, func() {
BeforeEach(rt.T("BE", DC("DC-BE")), OncePerOrdered)
AfterEach(rt.T("AE", DC("DC-AE")), OncePerOrdered)
Context("container", Ordered, ContinueOnFailure, func() {
BeforeAll(rt.T("BA", func() {
DeferCleanup(rt.T("DC-BA"))
F("fail")
}))
AfterAll(rt.T("AA", DC("DC-AA")))
It("A", rt.T("A"))
It("B", rt.T("B"))
It("C", rt.T("C"))
})
}, []string{
"BE", "BA", "AA", "AE", "DC-AE", "DC-BE", "DC-AA", "DC-BA",
},
"A", HaveFailed(types.FailureNodeInContainer, FailureNodeType(types.NodeTypeBeforeAll), "fail"),
"B", "C", HaveBeenSkippedWithMessage(SKIP_DUE_TO_FAILURE_IN_BEFORE_ALL),
),

Entry("ContinueOnFailure - when a BeforeAllFails and flakey attempts are allowed", false, func() {
BeforeEach(rt.T("BE", DC("DC-BE")), OncePerOrdered)
AfterEach(rt.T("AE", DC("DC-AE")), OncePerOrdered)
Context("container", Ordered, FlakeAttempts(2), ContinueOnFailure, func() {
BeforeAll(rt.T("BA", func() {
DeferCleanup(rt.T("DC-BA"))
F("fail")
}))
AfterAll(rt.T("AA", DC("DC-AA")))
It("A", rt.T("A"))
It("B", rt.T("B"))
It("C", rt.T("C"))
})
}, []string{
"BE", "BA", "AA", "DC-AA", "DC-BA", "BA", "AA", "AE", "DC-AE", "DC-BE", "DC-AA", "DC-BA",
},
"A", HaveFailed(types.FailureNodeInContainer, FailureNodeType(types.NodeTypeBeforeAll), "fail"),
"B", "C", HaveBeenSkippedWithMessage(SKIP_DUE_TO_FAILURE_IN_BEFORE_ALL),
),

Entry("ContinueOnFailure - when a BeforeAll fails at first, but then succeeds and flakes are allowed", true, func() {
BeforeEach(rt.T("BE", DC("DC-BE")), OncePerOrdered)
AfterEach(rt.T("AE", DC("DC-AE")), OncePerOrdered)
Context("container", Ordered, ContinueOnFailure, FlakeAttempts(3), func() {
BeforeAll(rt.T("BA", FlakeyFailer(2)))
AfterAll(rt.T("AA", DC("DC-AA")))
It("A", rt.T("A"))
It("B", rt.T("B"))
It("C", rt.T("C"))
})
}, []string{
"BE", "BA", "AA", "DC-AA", "BA", "AA", "DC-AA", "BA", "A", "B", "C", "AA", "AE", "DC-AE", "DC-BE", "DC-AA",
},
"A", "B", "C", HavePassed(),
),

Entry("ContinueOnFailure - when a BeforeAllFails and flakey attempts are allowed", false, func() {
BeforeEach(rt.T("BE", func() {
DeferCleanup(rt.T("DC-BE"))
F("fail")
}), OncePerOrdered)
AfterEach(rt.T("AE", DC("DC-AE")), OncePerOrdered)
Context("container", Ordered, FlakeAttempts(2), ContinueOnFailure, func() {
BeforeAll(rt.T("BA", DC("DC-BA")))
AfterAll(rt.T("AA", DC("DC-AA")))
It("A", rt.T("A"))
It("B", rt.T("B"))
It("C", rt.T("C"))
})
}, []string{
"BE", "AE", "DC-AE", "DC-BE", "BE", "AE", "DC-AE", "DC-BE",
},
"A", HaveFailed(types.FailureNodeAtTopLevel, FailureNodeType(types.NodeTypeBeforeEach), "fail"),
"B", "C", HaveBeenSkippedWithMessage(SKIP_DUE_TO_FAILURE_IN_BEFORE_ALL),
),

//All together now!
Entry("Exhaustive example for setup nodes that run once per ordered container", true, func() {
40 changes: 27 additions & 13 deletions internal/node.go
@@ -47,19 +47,20 @@ type Node struct {
ReportEachBody func(types.SpecReport)
ReportSuiteBody func(types.Report)

MarkedFocus bool
MarkedPending bool
MarkedSerial bool
MarkedOrdered bool
MarkedOncePerOrdered bool
FlakeAttempts int
MustPassRepeatedly int
Labels Labels
PollProgressAfter time.Duration
PollProgressInterval time.Duration
NodeTimeout time.Duration
SpecTimeout time.Duration
GracePeriod time.Duration
MarkedFocus bool
MarkedPending bool
MarkedSerial bool
MarkedOrdered bool
MarkedContinueOnFailure bool
MarkedOncePerOrdered bool
FlakeAttempts int
MustPassRepeatedly int
Labels Labels
PollProgressAfter time.Duration
PollProgressInterval time.Duration
NodeTimeout time.Duration
SpecTimeout time.Duration
GracePeriod time.Duration

NodeIDWhereCleanupWasGenerated uint
}
@@ -69,13 +70,15 @@ type focusType bool
type pendingType bool
type serialType bool
type orderedType bool
type continueOnFailureType bool
type honorsOrderedType bool
type suppressProgressReporting bool

const Focus = focusType(true)
const Pending = pendingType(true)
const Serial = serialType(true)
const Ordered = orderedType(true)
const ContinueOnFailure = continueOnFailureType(true)
const OncePerOrdered = honorsOrderedType(true)
const SuppressProgressReporting = suppressProgressReporting(true)

@@ -133,6 +136,8 @@ func isDecoration(arg interface{}) bool {
return true
case t == reflect.TypeOf(Ordered):
return true
case t == reflect.TypeOf(ContinueOnFailure):
return true
case t == reflect.TypeOf(OncePerOrdered):
return true
case t == reflect.TypeOf(SuppressProgressReporting):
@@ -241,6 +246,11 @@ func NewNode(deprecationTracker *types.DeprecationTracker, nodeType types.NodeTy
if !nodeType.Is(types.NodeTypeContainer) {
appendError(types.GinkgoErrors.InvalidDecoratorForNodeType(node.CodeLocation, nodeType, "Ordered"))
}
case t == reflect.TypeOf(ContinueOnFailure):
node.MarkedContinueOnFailure = bool(arg.(continueOnFailureType))
if !nodeType.Is(types.NodeTypeContainer) {
appendError(types.GinkgoErrors.InvalidDecoratorForNodeType(node.CodeLocation, nodeType, "ContinueOnFailure"))
}
case t == reflect.TypeOf(OncePerOrdered):
node.MarkedOncePerOrdered = bool(arg.(honorsOrderedType))
if !nodeType.Is(types.NodeTypeBeforeEach | types.NodeTypeJustBeforeEach | types.NodeTypeAfterEach | types.NodeTypeJustAfterEach) {
@@ -386,6 +396,10 @@ func NewNode(deprecationTracker *types.DeprecationTracker, nodeType types.NodeTy
appendError(types.GinkgoErrors.InvalidDeclarationOfFocusedAndPending(node.CodeLocation, nodeType))
}

if node.MarkedContinueOnFailure && !node.MarkedOrdered {
appendError(types.GinkgoErrors.InvalidContinueOnFailureDecoration(node.CodeLocation))
}

hasContext := node.HasContext || node.SynchronizedAfterSuiteProc1BodyHasContext || node.SynchronizedAfterSuiteAllProcsBodyHasContext || node.SynchronizedBeforeSuiteProc1BodyHasContext || node.SynchronizedBeforeSuiteAllProcsBodyHasContext

if !hasContext && (node.NodeTimeout > 0 || node.SpecTimeout > 0 || node.GracePeriod > 0) && len(errors) == 0 {
27 changes: 27 additions & 0 deletions internal/node_test.go
@@ -36,6 +36,7 @@ var _ = Describe("Partitioning Decorations", func() {
Pending,
Serial,
Ordered,
ContinueOnFailure,
SuppressProgressReporting,
NodeTimeout(time.Second),
GracePeriod(time.Second),
@@ -63,6 +64,7 @@ var _ = Describe("Partitioning Decorations", func() {
Pending,
Serial,
Ordered,
ContinueOnFailure,
SuppressProgressReporting,
NodeTimeout(time.Second),
GracePeriod(time.Second),
@@ -306,6 +308,31 @@ var _ = Describe("Constructing nodes", func() {
})
})

Describe("the ContinueOnFailure decoration", func() {
It("the node is not MarkedContinueOnFailure by default", func() {
node, errors := internal.NewNode(dt, ntCon, "", body)
Ω(node.MarkedContinueOnFailure).Should(BeFalse())
ExpectAllWell(errors)
})
It("marks the node as ContinueOnFailure", func() {
node, errors := internal.NewNode(dt, ntCon, "", body, Ordered, ContinueOnFailure)
Ω(node.MarkedContinueOnFailure).Should(BeTrue())
ExpectAllWell(errors)
})
It("does not allow non-container nodes to be marked", func() {
node, errors := internal.NewNode(dt, ntBef, "", body, cl, ContinueOnFailure)
Ω(node).Should(BeZero())
Ω(errors).Should(ContainElement(types.GinkgoErrors.InvalidDecoratorForNodeType(cl, ntBef, "ContinueOnFailure")))
Ω(dt.DidTrackDeprecations()).Should(BeFalse())
})
It("does not allow non-Ordered container nodes to be marked", func() {
node, errors := internal.NewNode(dt, ntCon, "", body, cl, ContinueOnFailure)
Ω(node).Should(BeZero())
Ω(errors).Should(ConsistOf(types.GinkgoErrors.InvalidContinueOnFailureDecoration(cl)))
Ω(dt.DidTrackDeprecations()).Should(BeFalse())
})
})

Describe("the OncePerOrdered decoration", func() {
It("applies to setup nodes, only", func() {
for _, nt := range []types.NodeType{ntBef, ntAf, ntJusAf, ntJusBef} {
7 changes: 7 additions & 0 deletions internal/suite.go
@@ -151,6 +151,13 @@ func (suite *Suite) PushNode(node Node) error {
}
}

if node.MarkedContinueOnFailure {
firstOrderedNode := suite.tree.AncestorNodeChain().FirstNodeMarkedOrdered()
if !firstOrderedNode.IsZero() {
return types.GinkgoErrors.InvalidContinueOnFailureDecoration(node.CodeLocation)
}
}

if node.NodeType == types.NodeTypeContainer {
// During PhaseBuildTopLevel we only track the top level containers without entering them
// We only enter the top level container nodes during PhaseBuildTree
33 changes: 33 additions & 0 deletions internal/suite_test.go
@@ -238,6 +238,39 @@ var _ = Describe("Suite", func() {
})
})

Context("when pushing a ContinueOnFailure Ordered container", func() {
Context("that is the top-level Ordered container", func() {
It("succeeds", func() {
var errors = make([]error, 3)
errors[0] = suite.PushNode(N(ntCon, "top-level-container", func() {
errors[1] = suite.PushNode(N(ntCon, "ordered-container", Ordered, ContinueOnFailure, func() {
errors[2] = suite.PushNode(N(types.NodeTypeIt, "spec", func() {}))
}))
}))
Ω(errors[0]).ShouldNot(HaveOccurred())
Ω(suite.BuildTree()).Should(Succeed())
Ω(errors[1]).ShouldNot(HaveOccurred())
Ω(errors[2]).ShouldNot(HaveOccurred())
})
})

Context("that is nested in another Ordered container", func() {
It("errors", func() {
var errors = make([]error, 3)

errors[0] = suite.PushNode(N(ntCon, "top-level-container", Ordered, func() {
errors[1] = suite.PushNode(N(ntCon, "ordered-container", cl, Ordered, ContinueOnFailure, func() {
errors[2] = suite.PushNode(N(types.NodeTypeIt, "spec", func() {}))
}))
}))
Ω(errors[0]).ShouldNot(HaveOccurred())
Ω(suite.BuildTree()).Should(Succeed())
Ω(errors[1]).Should(MatchError(types.GinkgoErrors.InvalidContinueOnFailureDecoration(cl)))
Ω(errors[2]).ShouldNot(HaveOccurred())
})
})
})

Context("when pushing a suite node during PhaseBuildTree", func() {
It("errors", func() {
var pushSuiteNodeErr error
9 changes: 9 additions & 0 deletions types/errors.go
@@ -298,6 +298,15 @@ func (g ginkgoErrors) SetupNodeNotInOrderedContainer(cl CodeLocation, nodeType N
}
}

func (g ginkgoErrors) InvalidContinueOnFailureDecoration(cl CodeLocation) error {
return GinkgoError{
Heading: "ContinueOnFailure not decorating an outermost Ordered Container",
Message: "ContinueOnFailure can only decorate an Ordered container, and this Ordered container must be the outermost Ordered container.",
CodeLocation: cl,
DocLink: "ordered-containers",
}
}

/* DeferCleanup errors */
func (g ginkgoErrors) DeferCleanupInvalidFunction(cl CodeLocation) error {
return GinkgoError{
3 changes: 3 additions & 0 deletions types/types.go
@@ -604,6 +604,9 @@ var ssEnumSupport = NewEnumSupport(map[uint]string{
func (ss SpecState) String() string {
return ssEnumSupport.String(uint(ss))
}
func (ss SpecState) GomegaString() string {
return ssEnumSupport.String(uint(ss))
}
func (ss *SpecState) UnmarshalJSON(b []byte) error {
out, err := ssEnumSupport.UnmarshJSON(b)
*ss = SpecState(out)
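The new `GomegaString` method mirrors `String`. If I recall correctly, Gomega's `format` package prefers a `GomegaString() string` method (its `GomegaStringer` interface) when rendering values, so `SpecState` values in matcher failure output print by name rather than as bare integers. A small standalone sketch under that assumption, using a toy enum rather than `SpecState` itself:

```go
package main

import (
	"fmt"

	"github.com/onsi/gomega/format"
)

// toy enum standing in for SpecState
type state uint

func (s state) GomegaString() string {
	return [...]string{"invalid", "passed", "failed"}[s]
}

func main() {
	// format.Object is what Gomega matchers use to render values in failure
	// messages; with GomegaString defined, the rendered value reads "passed"
	// rather than the raw integer 1.
	fmt.Println(format.Object(state(1), 0))
}
```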
