# feat: more robust Reusable containers experience #2768

Draft: mdelapenya wants to merge 34 commits into base `main`.

Commits (34):
4902c76 feat: calculate hash of a container request (mdelapenya, Aug 28, 2024)
10daab5 chore: run mod tidy (mdelapenya, Aug 29, 2024)
ed3e1fa feat: calculate the hash of a container request, considering the copi… (mdelapenya, Aug 30, 2024)
442dd4a chore: add labels for the container hashes (mdelapenya, Aug 30, 2024)
969fe59 feat: reuse containers based on hashes (mdelapenya, Aug 30, 2024)
c2f5e83 docs: document reuse (mdelapenya, Aug 30, 2024)
dcbb467 chore: add more tests for not hashing dir contents (mdelapenya, Sep 2, 2024)
c07b5dc fix: wait for the container to be created (mdelapenya, Sep 2, 2024)
d1d40ce chore: simplify deprecation (mdelapenya, Sep 2, 2024)
7aa26da docs: improve docs for reuse (mdelapenya, Sep 3, 2024)
35da30a feat: do not remove the container with Ryuk (mdelapenya, Sep 3, 2024)
0ffec34 docs: enrich reuse docs (mdelapenya, Sep 4, 2024)
7a505ba feat: support modules to be reused (mdelapenya, Sep 4, 2024)
a544bc0 docs: refinement (mdelapenya, Sep 4, 2024)
c9356ed chore: adjust tests (mdelapenya, Sep 4, 2024)
1f66181 chore: adapt test to flakiness (mdelapenya, Sep 4, 2024)
31e54b9 chore: calculate hash at the right time (mdelapenya, Sep 4, 2024)
44b1b73 chore: improve the subprocess tests (mdelapenya, Sep 4, 2024)
781ade7 chore: make lint (mdelapenya, Sep 4, 2024)
e975ae9 chore: no need to ignore Reuse (mdelapenya, Sep 4, 2024)
dd91901 docs: warn about flakiness in concurrent executions (mdelapenya, Sep 4, 2024)
799ee15 docs: do not use postgres in funcitonal options (mdelapenya, Sep 5, 2024)
227fde5 Merge branch 'main' into reuse-struct-hash (mdelapenya, Jan 8, 2025)
c3c74c3 chore: wrap error (mdelapenya, Jan 8, 2025)
6d5e655 chore: combine if (mdelapenya, Jan 8, 2025)
702b457 chor: return notFound errors (mdelapenya, Jan 8, 2025)
b7b3a4f chore: simplify test (mdelapenya, Jan 8, 2025)
8323081 chore: remove not needed if (mdelapenya, Jan 8, 2025)
98fdeb1 chore: use faster strconv.FormatUint (mdelapenya, Jan 8, 2025)
1722fba fix: use streaming to avoid copying the entire file in memory (mdelapenya, Jan 8, 2025)
4c13c2b chore: handle errors in the hash function (mdelapenya, Jan 10, 2025)
cecb112 chore: avoid collitions when building the hash of the files (mdelapenya, Jan 10, 2025)
d4fc593 chore: delete Reap label, not SessionID (mdelapenya, Jan 10, 2025)
a1a51a4 fix: use faster strconv.FormatUint (mdelapenya, Jan 10, 2025)
container.go (6 additions, 5 deletions)

@@ -102,7 +102,7 @@ type FromDockerfile struct {
// BuildOptionsModifier Modifier for the build options before image build. Use it for
// advanced configurations while building the image. Please consider that the modifier
// is called after the default build options are set.
BuildOptionsModifier func(*types.ImageBuildOptions)
BuildOptionsModifier func(*types.ImageBuildOptions) `hash:"ignore"`
}

type ContainerFile struct {
@@ -141,6 +141,7 @@ type ContainerRequest struct {
RegistryCred string // Deprecated: Testcontainers will detect registry credentials automatically
WaitingFor wait.Strategy
Name string // for specifying container name
Reuse bool // For reusing an existing container
Hostname string
WorkingDir string // specify the working directory of the container
ExtraHosts []string // Deprecated: Use HostConfigModifier instead
@@ -161,10 +162,10 @@ type ContainerRequest struct {
ShmSize int64 // Amount of memory shared with the host (in bytes)
CapAdd []string // Deprecated: Use HostConfigModifier instead. Add Linux capabilities
CapDrop []string // Deprecated: Use HostConfigModifier instead. Drop Linux capabilities
ConfigModifier func(*container.Config) // Modifier for the config before container creation
HostConfigModifier func(*container.HostConfig) // Modifier for the host config before container creation
EnpointSettingsModifier func(map[string]*network.EndpointSettings) // Modifier for the network settings before container creation
LifecycleHooks []ContainerLifecycleHooks // define hooks to be executed during container lifecycle
ConfigModifier func(*container.Config) `hash:"ignore"` // Modifier for the config before container creation
HostConfigModifier func(*container.HostConfig) `hash:"ignore"` // Modifier for the host config before container creation
EnpointSettingsModifier func(map[string]*network.EndpointSettings) `hash:"ignore"` // Modifier for the network settings before container creation
LifecycleHooks []ContainerLifecycleHooks `hash:"ignore"` // define hooks to be executed during container lifecycle
LogConsumerCfg *LogConsumerConfig // define the configuration for the log producer and its log consumers to follow the logs
}

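The `hash:"ignore"` tags above only take effect if the request hash is computed with a hasher that honors that tag convention. A minimal sketch of the intended behavior, assuming a hashstructure-style library (github.com/mitchellh/hashstructure uses this exact tag; the PR's own hashing helper is not shown in this diff):

```golang
package main

import (
	"fmt"

	"github.com/mitchellh/hashstructure/v2"
)

// request mimics the shape of ContainerRequest: comparable config fields
// plus a function-typed modifier that cannot be hashed deterministically.
type request struct {
	Image          string
	Name           string
	ConfigModifier func() `hash:"ignore"`
}

func main() {
	a := request{Image: "redis:7", Name: "cache", ConfigModifier: func() {}}
	b := request{Image: "redis:7", Name: "cache", ConfigModifier: nil}

	ha, err := hashstructure.Hash(a, hashstructure.FormatV2, nil)
	if err != nil {
		panic(err)
	}
	hb, err := hashstructure.Hash(b, hashstructure.FormatV2, nil)
	if err != nil {
		panic(err)
	}

	// Ignored fields do not contribute to the hash, so requests that differ
	// only in their modifiers map to the same container hash.
	fmt.Println(ha == hb) // true
}
```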
docker.go (113 additions, 28 deletions)

@@ -15,6 +15,7 @@ import (
"os"
"path/filepath"
"regexp"
"strconv"
"sync"
"time"

@@ -1205,29 +1206,82 @@ func (p *DockerProvider) CreateContainer(ctx context.Context, req ContainerRequest)
combineContainerHooks(defaultHooks, origLifecycleHooks),
}

err = req.creatingHook(ctx)
if err != nil {
return nil, err
var resp container.CreateResponse
if req.Reuse {
// we must protect the reusability of the container in the case it's invoked
// in a parallel execution, via ParallelContainers or t.Parallel()
reuseContainerMx.Lock()
defer reuseContainerMx.Unlock()
> **Collaborator** (on lines +1213 to +1214): question: This will lock for the duration of rest of the call, is that the intention?


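With `defer`, the mutex stays held until `CreateContainer` returns, serializing the entire creation path and not just the reuse lookup; the question above is whether that scope is intended. A neutral sketch of the two possible scopes (illustrative, not the PR's code):

```golang
package locking

import "sync"

var reuseMx sync.Mutex

// lockForWholeCall mirrors the PR's pattern: defer keeps the mutex held
// until the surrounding function returns, so concurrent reusers are fully
// serialized, including the (slow) container creation itself.
func lockForWholeCall(createContainer func()) {
	reuseMx.Lock()
	defer reuseMx.Unlock()
	createContainer() // still holding the lock
}

// lockForLookupOnly releases the mutex after the reuse check, which is
// faster under contention but reopens the race between lookup and create.
func lockForLookupOnly(lookupExisting func() bool, createContainer func()) {
	reuseMx.Lock()
	found := lookupExisting()
	reuseMx.Unlock()
	if !found {
		createContainer() // lock already released: two callers may both get here
	}
}
```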
// Remove the Reap label from the request, as we don't want Ryuk to control
// the container lifecycle in the case of reusing containers.
delete(req.Labels, core.LabelReap)

// calculate the hash, and add the labels, just before creating the container
hash, err := req.hash()
if err != nil {
return nil, fmt.Errorf("hash container request: %w", err)
}

req.Labels[core.LabelContainerHash] = strconv.FormatUint(hash.Hash, 10)
req.Labels[core.LabelCopiedFilesHash] = strconv.FormatUint(hash.FilesHash, 10)

// in the case different test programs are creating a container with the same hash,
// we must check if the container is already created. For that we wait up to 5 seconds
// for the container to be created. If the error means the container is not found, we
// can proceed with the creation of the container.
// This is needed because we need to synchronize the creation of the container across
// different test programs.
c, err := p.waitContainerCreationInTimeout(ctx, hash, 5*time.Second)
> **Member Author:** I see this as a potential configuration:
>
> - a property: `testcontainers.reuse.search.timeout`, and
> - an env var: `TESTCONTAINERS_REUSE_SEARCH_TIMEOUT`

> **Member:** Since the PR description says:
>
> > We expect this change helps the community, but at the same time warn about its usage in parallel executions, as it could be the case that two concurrent test sessions get to the container creation at the same time, which could lead to the creation of two containers with the same request.
>
> wouldn't we want to accept this limitation and allow for the race condition with the current implementation?

> **Member Author:** Indeed. Added that comment as an idea for a potential follow-up.

if err != nil && !errdefs.IsNotFound(err) {
// another error occurred different from not found, so we return the error
return nil, fmt.Errorf("wait container creation: %w", err)
}

// Create a new container if the request is to reuse the container, but there is no container found by hash
if c != nil {
resp.ID = c.ID

// replace the logging messages for reused containers:
// we know the first lifecycle hook is the logger hook,
// so it's safe to replace its first message for reused containers.
> **Collaborator** (on lines +1245 to +1247): question: Is that always valid, could a user not have customised this?

> **Member Author:** There are default hooks and user defined hooks.

req.LifecycleHooks[0].PreCreates[0] = func(ctx context.Context, req ContainerRequest) error {
Logger.Printf("🔥 Reusing container: %s", resp.ID[:12])
return nil
}
req.LifecycleHooks[0].PostCreates[0] = func(ctx context.Context, c Container) error {
Logger.Printf("🔥 Container reused: %s", resp.ID[:12])
return nil
}
}
}

resp, err := p.client.ContainerCreate(ctx, dockerInput, hostConfig, networkingConfig, platform, req.Name)
if err != nil {
return nil, fmt.Errorf("container create: %w", err)
}

// #248: If there is more than one network specified in the request attach newly created container to them one by one
if len(req.Networks) > 1 {
for _, n := range req.Networks[1:] {
nw, err := p.GetNetwork(ctx, NetworkRequest{
Name: n,
})
if err == nil {
endpointSetting := network.EndpointSettings{
Aliases: req.NetworkAliases[n],
}
err = p.client.NetworkConnect(ctx, nw.ID, resp.ID, &endpointSetting)
if err != nil {
return nil, fmt.Errorf("network connect: %w", err)
// If the container was not found by hash, create a new one
if resp.ID == "" {
> **Collaborator:** suggestion: this function is getting too large, we should look to split it out into more consumable pieces.

err = req.creatingHook(ctx)
if err != nil {
return nil, err
}

resp, err = p.client.ContainerCreate(ctx, dockerInput, hostConfig, networkingConfig, platform, req.Name)
if err != nil {
return nil, fmt.Errorf("container create: %w", err)
}

// #248: If there is more than one network specified in the request attach newly created container to them one by one
if len(req.Networks) > 1 {
for _, n := range req.Networks[1:] {
nw, err := p.GetNetwork(ctx, NetworkRequest{
Name: n,
})
if err == nil {
endpointSetting := network.EndpointSettings{
Aliases: req.NetworkAliases[n],
}
err = p.client.NetworkConnect(ctx, nw.ID, resp.ID, &endpointSetting)
if err != nil {
return nil, fmt.Errorf("network connect: %w", err)
}
}
}
}
@@ -1280,13 +1334,38 @@ func (p *DockerProvider) findContainerByName(ctx context.Context, name string) (
if len(containers) > 0 {
return &containers[0], nil
}
return nil, nil
return nil, errdefs.NotFound(fmt.Errorf("container not found by name: %s", name))
}

func (p *DockerProvider) findContainerByHash(ctx context.Context, ch containerHash) (*types.Container, error) {
filter := filters.NewArgs(
filters.Arg("label", fmt.Sprintf("%s=%d", core.LabelContainerHash, ch.Hash)),
filters.Arg("label", fmt.Sprintf("%s=%d", core.LabelCopiedFilesHash, ch.FilesHash)),
)

containers, err := p.client.ContainerList(ctx, container.ListOptions{Filters: filter})
if err != nil {
return nil, err
}
defer p.Close()
> **Collaborator** (on `defer p.Close()`): question: It seems odd to close the client here, could you clarify why that's needed?


if len(containers) > 0 {
return &containers[0], nil
}
return nil, errdefs.NotFound(fmt.Errorf("container not found by hash: %s", ch.String()))
}

func (p *DockerProvider) waitContainerCreation(ctx context.Context, name string) (*types.Container, error) {
func (p *DockerProvider) waitContainerCreation(ctx context.Context, hash containerHash) (*types.Container, error) {
return p.waitContainerCreationInTimeout(ctx, hash, 5*time.Second)
}

func (p *DockerProvider) waitContainerCreationInTimeout(ctx context.Context, hash containerHash, timeout time.Duration) (*types.Container, error) {
exp := backoff.NewExponentialBackOff()
exp.MaxElapsedTime = timeout

return backoff.RetryNotifyWithData(
func() (*types.Container, error) {
c, err := p.findContainerByName(ctx, name)
c, err := p.findContainerByHash(ctx, hash)
if err != nil {
if !errdefs.IsNotFound(err) && isPermanentClientError(err) {
return nil, backoff.Permanent(err)
@@ -1295,11 +1374,11 @@ func (p *DockerProvider) waitContainerCreation(ctx context.Context, name string)
}

if c == nil {
return nil, errdefs.NotFound(fmt.Errorf("container %s not found", name))
return nil, errdefs.NotFound(fmt.Errorf("container %v not found", hash))
}
return c, nil
},
backoff.WithContext(backoff.NewExponentialBackOff(), ctx),
backoff.WithContext(exp, ctx),
func(err error, duration time.Duration) {
if errdefs.IsNotFound(err) {
return
@@ -1309,8 +1388,14 @@ func (p *DockerProvider) waitContainerCreation(ctx context.Context, name string)
)
}

// Deprecated: it will be removed in the next major release.
func (p *DockerProvider) ReuseOrCreateContainer(ctx context.Context, req ContainerRequest) (con Container, err error) {
c, err := p.findContainerByName(ctx, req.Name)
hash, err := req.hash()
if err != nil {
return nil, fmt.Errorf("hash container request: %w", err)
}

c, err := p.findContainerByHash(ctx, hash)
if err != nil {
return nil, err
}
Expand All @@ -1322,7 +1407,7 @@ func (p *DockerProvider) ReuseOrCreateContainer(ctx context.Context, req Contain
if !createContainerFailDueToNameConflictRegex.MatchString(err.Error()) {
return nil, err
}
c, err = p.waitContainerCreation(ctx, req.Name)
c, err = p.waitContainerCreation(ctx, hash)
if err != nil {
return nil, err
}
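From the caller's side, the new flow means two identical requests with `Reuse: true` should converge on a single container. A sketch of the expected usage, based on the `Reuse` field this PR adds to `ContainerRequest` (illustrative while the PR is a draft; names may change):

```golang
package reuse

import (
	"context"
	"testing"

	"github.com/testcontainers/testcontainers-go"
	"github.com/testcontainers/testcontainers-go/wait"
)

func TestReuseByRequestHash(t *testing.T) {
	ctx := context.Background()

	req := testcontainers.ContainerRequest{
		Image:        "redis:7-alpine",
		ExposedPorts: []string{"6379/tcp"},
		WaitingFor:   wait.ForListeningPort("6379/tcp"),
		Reuse:        true, // keyed by the request hash labels, not by Name
	}

	c1, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
		ContainerRequest: req,
		Started:          true,
	})
	if err != nil {
		t.Fatal(err)
	}

	// An identical request should attach to the already-running container
	// instead of creating a second one; Ryuk will not reap it.
	c2, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
		ContainerRequest: req,
		Started:          true,
	})
	if err != nil {
		t.Fatal(err)
	}

	if c1.GetContainerID() != c2.GetContainerID() {
		t.Fatalf("expected the same container to be reused, got %s and %s",
			c1.GetContainerID(), c2.GetContainerID())
	}
}
```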
docker_test.go (1 addition, 1 deletion)

@@ -2142,7 +2142,7 @@ func TestDockerProvider_waitContainerCreation_retries(t *testing.T) {
// give a chance to retry
ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
defer cancel()
_, _ = p.waitContainerCreation(ctx, "someID")
_, _ = p.waitContainerCreation(ctx, containerHash{})

assert.Positive(t, m.containerListCount)
assert.Equal(t, tt.shouldRetry, m.containerListCount > 1)
docs/features/common_functional_options.md (16 additions, 4 deletions)

@@ -32,7 +32,7 @@ postgres, err = postgresModule.Run(ctx, "postgres:15-alpine", testcontainers.Wit
If you need to access a port that is already running in the host, you can use `testcontainers.WithHostPortAccess` for example:

```golang
postgres, err = postgresModule.Run(ctx, "postgres:15-alpine", testcontainers.WithHostPortAccess(8080))
ctr, err = tcmodule.Run(ctx, "your-image:your-tag", testcontainers.WithHostPortAccess(8080))
```

To understand more about this feature, please read the [Exposing host ports to the container](/features/networking/#exposing-host-ports-to-the-container) documentation.
@@ -70,7 +70,7 @@ useful context instead of appearing out of band.
```golang
func TestHandler(t *testing.T) {
logger := TestLogger(t)
ctr, err := postgresModule.Run(ctx, "postgres:15-alpine", testcontainers.WithLogger(logger))
ctr, err := yourModule.Run(ctx, "your-image:your-tag", testcontainers.WithLogger(logger))
CleanupContainer(t, ctr)
require.NoError(t, err)
// Do something with container.
@@ -136,6 +136,18 @@ If you want to attach your containers to a throw-away network, you can use the `

In the case you need to retrieve the network name, you can use the `Networks(ctx)` method of the `Container` interface, right after it's running, which returns a slice of strings with the names of the networks where the container is attached.

#### WithReuse

- Not available until the next release of testcontainers-go <a href="https://github.com/testcontainers/testcontainers-go"><span class="tc-version">:material-tag: main</span></a>

If you want to reuse a container across different test executions, you can use the `testcontainers.WithReuse` option. This option keeps the container running after the test execution, so it can be reused by any other test sharing the same `ContainerRequest`. As a result, the container is not terminated by Ryuk.

```golang
ctr, err = tcmodule.Run(ctx, "your-image:your-tag", testcontainers.WithReuse())
```

Please read the [Reuse containers](/features/creating_container#reusable-container) documentation for more information.

#### Docker type modifiers

If you need an advanced configuration for the container, you can leverage the following Docker type modifiers:
@@ -144,14 +156,14 @@ If you need an advanced configuration for the container, you can leverage the fo
- `testcontainers.WithHostConfigModifier`
- `testcontainers.WithEndpointSettingsModifier`

Please read the [Create containers: Advanced Settings](/features/creating_container.md#advanced-settings) documentation for more information.
Please read the [Create containers: Advanced Settings](/features/creating_container#advanced-settings) documentation for more information.

#### Customising the ContainerRequest

This option will merge the customized request into the module's own `ContainerRequest`.

```go
container, err := Run(ctx, "postgres:13-alpine",
ctr, err := Run(ctx, "your-image:your-tag",
/* Other module options */
testcontainers.CustomizeRequest(testcontainers.GenericContainerRequest{
ContainerRequest: testcontainers.ContainerRequest{