minsev: Refactor unit tests #5869

Closed
wants to merge 2 commits
57 changes: 22 additions & 35 deletions processors/minsev/minsev_test.go
@@ -56,81 +56,68 @@ func (p *processor) ForceFlush(ctx context.Context) error {
return p.ReturnErr
}

- func (p *processor) Reset() {
- p.OnEmitCalls = p.OnEmitCalls[:0]
- p.EnabledCalls = p.EnabledCalls[:0]
- p.ShutdownCalls = p.ShutdownCalls[:0]
- p.ForceFlushCalls = p.ForceFlushCalls[:0]
- }

func TestLogProcessorOnEmit(t *testing.T) {
t.Run("Passthrough", func(t *testing.T) {
- wrapped := &processor{ReturnErr: assert.AnError}
-
- p := NewLogProcessor(wrapped, api.SeverityTrace1)
- ctx := context.Background()
- r := &log.Record{}
for _, sev := range severities {
+ wrapped := &processor{ReturnErr: assert.AnError}
+ p := NewLogProcessor(wrapped, api.SeverityTrace1)
+ ctx := context.Background()
+ r := &log.Record{}
Comment on lines +62 to +65
Contributor

Isn't this allocating every iteration instead of once?

Member Author
@pellared pellared Jul 8, 2024

Is it a problem? It is just a unit test (not even an integration test). It is not a benchmark, and it does not make the test execution noticeably longer.

Contributor

> Is it a problem? It is just a unit test (not even an integration test). It is not a benchmark, and it does not make the test execution noticeably longer.

How does this line of reasoning not apply to changes submitted in this PR in general? This seems to be a subjective change with functionally equivalent code being produced. Am I missing why this shouldn't be evaluated on these merits?

Member Author

It reduces the cyclomatic complexity, the number of lines of code, and the API surface of the test double. I find this simpler, more readable, and more maintainable than requiring future developers to think about when the processor needs to be reset. I do not find that simplifying test code is subjective.

Contributor

> It reduces the cyclomatic complexity, the number of lines of code, and the API surface of the test double. I find this simpler, more readable, and more maintainable than requiring future developers to think about when the processor needs to be reset. I do not find that simplifying test code is subjective.

Right, and I am pointing out that your version, which you point out will be copied, has memory issues that should not be introduced.

I'm still not following how your subjective feelings of simplicity are more valid than the criticism of "Is it a problem? It is just a unit test (not even an integration test). It is not a benchmark, and it does not make the test execution noticeably longer."

Are these changes just subjective feel? If so, I think performance evaluations like the one provided in this comment thread need to be considered.

Member Author

I removed the Reset method (API) from the processor type (which is a test double).

Contributor

You mean benchmark a unit test?

I'm not sure you need a benchmark to see the inefficient memory design of the test.

Member Author

Just to quote https://wiki.c2.com/?StructuredProgrammingWithGoToStatements:

> Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.

I just think we do not need to care about performance in this scenario.

> Yet we should not pass up our opportunities in that critical 3%.

I do not think this is that 3% 😉

Contributor

Sure, but if you replace "speed" with "code quality" in what you quoted, the same applies to the changes being presented in this PR.

Like I said above, I'm not sure any of this discussion is really justified given that the functional test coverage does not change with this PR.

Member Author

> given the functional test coverage does not change with this PR

From https://en.wikipedia.org/wiki/Code_refactoring:

> Refactoring is intended to improve the design, structure, and/or implementation of the software (its non-functional attributes), while preserving its functionality. Potential advantages of refactoring may include improved code readability and reduced complexity

> Sure, but if you replace "speed" with "code quality" in what you quoted, the same applies to the changes being presented in this PR.

Feel free to close the PR if you are against the change. As the code owner, I think you have the right to do it.

I created the PR as part of my work on #5861.


r.SetSeverity(sev)
assert.ErrorIs(t, p.OnEmit(ctx, *r), assert.AnError, sev.String())

if assert.Lenf(t, wrapped.OnEmitCalls, 1, "Record with severity %s not passed-through", sev) {
assert.Equal(t, ctx, wrapped.OnEmitCalls[0].Ctx, sev.String())
assert.Equal(t, *r, wrapped.OnEmitCalls[0].Record, sev.String())
}
- wrapped.Reset()
}
})

t.Run("Dropped", func(t *testing.T) {
- wrapped := &processor{ReturnErr: assert.AnError}
-
- p := NewLogProcessor(wrapped, api.SeverityFatal4+1)
- ctx := context.Background()
- r := &log.Record{}
for _, sev := range severities {
+ wrapped := &processor{ReturnErr: assert.AnError}
+ p := NewLogProcessor(wrapped, api.SeverityFatal4+1)
+ ctx := context.Background()
+ r := &log.Record{}

r.SetSeverity(sev)
assert.NoError(t, p.OnEmit(ctx, *r), assert.AnError, sev.String())

- if !assert.Lenf(t, wrapped.OnEmitCalls, 0, "Record with severity %s passed-through", sev) {
Member Author

Contributor
@MrAlias MrAlias Jul 8, 2024

I think I gotcha. It seems a bit like the cyclomatic complexity metric is being over-applied here, but if that is truly the desire, there is no need for the if statement.

assert.Lenf(t, wrapped.OnEmitCalls, 0, "Record with severity %s passed-through", sev)
wrapped.Reset()

That would avoid the if statement you are concerned about without the added memory inefficiencies. There would still be a computational inefficiency, as the function call will be a no-op in the existing true conditional case.

I'm still not sold that this metric is really serving to better the code with this type of change, though.

- wrapped.Reset()
- }
+ assert.Lenf(t, wrapped.OnEmitCalls, 0, "Record with severity %s passed-through", sev)
}
})
}

func TestLogProcessorEnabled(t *testing.T) {
t.Run("Passthrough", func(t *testing.T) {
- wrapped := &processor{}
-
- p := NewLogProcessor(wrapped, api.SeverityTrace1)
- ctx := context.Background()
- r := &log.Record{}
for _, sev := range severities {
+ wrapped := &processor{}
+ p := NewLogProcessor(wrapped, api.SeverityTrace1)
+ ctx := context.Background()
+ r := &log.Record{}

r.SetSeverity(sev)
assert.True(t, p.Enabled(ctx, *r), sev.String())

if assert.Lenf(t, wrapped.EnabledCalls, 1, "Record with severity %s not passed-through", sev) {
assert.Equal(t, ctx, wrapped.EnabledCalls[0].Ctx, sev.String())
assert.Equal(t, *r, wrapped.EnabledCalls[0].Record, sev.String())
}
- wrapped.Reset()
}
})

t.Run("NotEnabled", func(t *testing.T) {
- wrapped := &processor{}
-
- p := NewLogProcessor(wrapped, api.SeverityFatal4+1)
- ctx := context.Background()
- r := &log.Record{}
for _, sev := range severities {
+ wrapped := &processor{}
+ p := NewLogProcessor(wrapped, api.SeverityFatal4+1)
+ ctx := context.Background()
+ r := &log.Record{}

r.SetSeverity(sev)
assert.False(t, p.Enabled(ctx, *r), sev.String())

- if !assert.Lenf(t, wrapped.EnabledCalls, 0, "Record with severity %s passed-through", sev) {
- wrapped.Reset()
- }
+ assert.Lenf(t, wrapped.EnabledCalls, 0, "Record with severity %s passed-through", sev)
}
})
}