
[Bug]: can not get logs from container which is dead or marked for removal #606

Open

bgranvea opened this issue Nov 2, 2022 · 4 comments · Fixed by #947

Comments


bgranvea commented Nov 2, 2022

Testcontainers version

0.15.0

Using the latest Testcontainers version?

Yes

Host OS

Linux

Host Arch

amd64

Go Version

1.19

Docker version

Client: Docker Engine - Community
 Version:           20.10.21
 API version:       1.41
 Go version:        go1.18.7
 Git commit:        baeda1f
 Built:             Tue Oct 25 18:04:24 2022
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.21
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.18.7
  Git commit:       3056208
  Built:            Tue Oct 25 18:02:38 2022
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.9
  GitCommit:        1c90a442489720eec95342e1789ee8a5e1b9536f
 runc:
  Version:          1.1.4
  GitCommit:        v1.1.4-0-g5fd4c4d
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

Docker info

Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Docker Buildx (Docker Inc., v0.9.1-docker)
  scan: Docker Scan (Docker Inc., v0.21.0)

Server:
 Containers: 1
  Running: 1
  Paused: 0
  Stopped: 0
 Images: 5
 Server Version: 20.10.21
 Storage Driver: overlay2
  Backing Filesystem: xfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 1c90a442489720eec95342e1789ee8a5e1b9536f
 runc version: v1.1.4-0-g5fd4c4d
 init version: de40ad0
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 3.10.0-1160.71.1.el7.x86_64
 Operating System: CentOS Linux 7 (Core)
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 15.51GiB
 Name: dev-bgr-test1-lnx7.iv.local
 ID: JCOB:B6QD:MFHY:4B4Y:Z7OA:H7LA:Z5NM:TDOA:7DD7:AMMB:ZBSJ:4KIU
 Docker Root Dir: /var/lib/docker
 Debug Mode: true
  File Descriptors: 34
  Goroutines: 41
  System Time: 2022-11-02T17:54:20.539106249+01:00
  EventsListeners: 0

What happened?

I'm starting a container and waiting for it to exit (this is the expected behavior). I also attach a log consumer to display what is going on.

	// create the container (not started yet)
	c, err := testcontainers.GenericContainer(context.Background(), testcontainers.GenericContainerRequest{
		ContainerRequest: testcontainers.ContainerRequest{
			Image:    image,
			Networks: []string{network},
		},
	})
	if err != nil {
		return err
	}
	defer c.Terminate(context.Background())

	// attach the log consumer before starting the container
	c.FollowOutput(&stdoutLogConsumer{})

	err = c.Start(context.Background())
	if err != nil {
		return err
	}

	err = c.StartLogProducer(context.Background())
	if err != nil {
		return err
	}

	// block until the container exits on its own (expected), up to 180s
	err = wait.ForExit().WithExitTimeout(180*time.Second).WaitUntilReady(context.Background(), c)
	if err != nil {
		return err
	}

	err = c.StopLogProducer()
	if err != nil {
		return err
	}
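
For context, the stdoutLogConsumer referenced above is not shown in the report; a minimal sketch of such a consumer (a hypothetical implementation, assuming the fmt and testcontainers imports) could look like this:

	// stdoutLogConsumer is a hypothetical LogConsumer implementation
	// that prints every log entry it receives to stdout.
	type stdoutLogConsumer struct{}

	// Accept implements the testcontainers.LogConsumer interface.
	func (s *stdoutLogConsumer) Accept(l testcontainers.Log) {
		fmt.Print(string(l.Content))
	}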

Sometimes the test fails with this error:

     | panic: Error response from daemon: can not get logs from container which is dead or marked for removal
     | goroutine 51 [running]:
     | github.com/testcontainers/testcontainers-go.(*DockerContainer).StartLogProducer.func1()
     | 	/go/pkg/mod/github.com/testcontainers/testcontainers-go@v0.15.0/docker.go:576 +0x5f6
     | created by github.com/testcontainers/testcontainers-go.(*DockerContainer).StartLogProducer
     | 	/go/pkg/mod/github.com/testcontainers/testcontainers-go@v0.15.0/docker.go:558 +0x8a

I think there is a problem with the way the log producer handles a container that terminates:

		r, err := c.provider.client.ContainerLogs(ctx, c.GetContainerID(), options)
		if err != nil {
			// if we can't get the logs, panic, we can't return an error to anything
			// from within this goroutine
			panic(err)
		}

I suggest logging a warning and stopping the log producer instead of calling panic.
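
A hedged sketch of what that could look like (not the actual library code; the standard library log call is used here only for illustration):

	r, err := c.provider.client.ContainerLogs(ctx, c.GetContainerID(), options)
	if err != nil {
		// instead of panicking, warn and let the producer goroutine
		// shut down gracefully; the container is most likely gone
		log.Printf("stopping log producer: %v", err)
		return
	}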

Relevant log output

No response

Additional Information

No response

@ThomasObenaus

I'm facing the same issue and agree with the suggestion by @bgranvea.

At the moment it sometimes happens that, when a test case is done and the corresponding container has been closed, this piece of code

[screenshot of the log producer goroutine in docker.go]

detects that the connection to the container was lost and tries to reconnect. During this attempt it hits an error (since the targeted container is no longer alive) and panics, as sketched below.
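
In pseudo-Go, the reconnect path being described is roughly this (a simplified sketch of the v0.15.0 producer goroutine, not the verbatim source):

	for {
		// ask the daemon for the container's log stream; once the
		// container is dead or marked for removal this call fails
		r, err := c.provider.client.ContainerLogs(ctx, c.GetContainerID(), options)
		if err != nil {
			// the goroutine has no way to return an error, so it
			// panics and takes the whole test binary down with it
			panic(err)
		}
		// ... read from r until the stream ends, then loop to reconnect ...
	}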

@ThomasObenaus

The bad thing about this behavior is that it makes tests flaky: they sometimes fail for no relevant reason.

@mdelapenya (Collaborator)

Reopening for #1164

@mdelapenya (Collaborator)

@bgranvea could you try with the latest release? There were a few improvements in the log consumer code that could have resolved this.
