
Docker fails to start containers with cgroup memory allocation error. #841

Open · 2 of 3 tasks
JakeBonek opened this issue Oct 29, 2019 · 65 comments

@JakeBonek

  • This is a bug report
  • This is a feature request
  • I searched existing issues before opening this one

Expected behavior

Docker should successfully start hello-world container.

Actual behavior

After a certain amount of time, docker fails to start any containers on a host with the following error:

[root@REDACTED]# docker run hello-world
docker: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:279: applying cgroup configuration for process caused \"mkdir /sys/fs/cgroup/memory/docker/fe4159ed6f4ec16af63ba0c2af53ec9c6b0c0c2ac42ff96f6816d5e28a821b4e: cannot allocate memory\"": unknown.
ERRO[0000] error waiting for container: context canceled

In the past, this issue has been fixed by restarting the Docker daemon or rebooting the machine, even though the Docker daemon is active and running at the time the container is started. The machine has ample available memory and CPUs and should have no problem starting the container.

Steps to reproduce the behavior

Output of docker version:

Client:
 Version:           18.06.1-ce
 API version:       1.38
 Go version:        go1.10.3
 Git commit:        e68fc7a
 Built:             Tue Aug 21 17:23:03 2018
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          18.06.1-ce
  API version:      1.38 (minimum version 1.12)
  Go version:       go1.10.3
  Git commit:       e68fc7a
  Built:            Tue Aug 21 17:25:29 2018
  OS/Arch:          linux/amd64
  Experimental:     false

Output of docker info:

Containers: 39
 Running: 17
 Paused: 0
 Stopped: 22
Images: 39
Server Version: 18.06.1-ce
Storage Driver: overlay2
 Backing Filesystem: xfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 468a545b9edcd5932818eb9de8e72413e616e86e
runc version: 69663f0bd4b60df09991c08812a60108003fa340
init version: fec3683
Security Options:
 seccomp
  Profile: default
Kernel Version: 3.10.0-957.1.3.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 56
Total Memory: 503.6GiB
Name: REDACTED
ID: UK7O:GWIS:TFRJ:JDUB:5SS7:GH6W:TA4K:NBQC:7W4V:YLZJ:Q2AV:UBXA
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

WARNING: bridge-nf-call-ip6tables is disabled

Additional environment details (AWS, VirtualBox, physical, etc.)
At the time of running the container, the host has 500GB of available memory and around 50+ free cores.

@JakeBonek JakeBonek changed the title Docker fails to start any containers with cgroup memory allocation error. Docker fails to start containers with cgroup memory allocation error. Oct 29, 2019
@kai-cool-dev

Do you have swap active? Try to disable the swap!

@JakeBonek
Author

Swap is disabled on the host.

@thaJeztah
Member

This could be related to a bug in the RHEL/CentOS kernels where kernel-memory cgroups don't work properly; we included a workaround in later versions of Docker to disable this feature; moby/moby#38145 (backported to Docker 18.09 and up in docker-archive/engine#121)

Note that Docker 18.06 reached EOL, and won't be updated with this fix, so I recommend updating to a current version.

I'm closing this issue because of the above, but feel free to continue the conversation
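For anyone trying to confirm whether a kernel-memory limit is being applied to a container's cgroup on a cgroup v1 host, a minimal check could look like the sketch below (the container name my-container is just a placeholder):

# Resolve the full container ID (my-container is a placeholder name)
CID=$(docker inspect --format '{{.Id}}' my-container)
# On cgroup v1, these files show the kernel-memory limit and usage for the
# container's cgroup; a huge default limit means no explicit kmem limit was set.
cat /sys/fs/cgroup/memory/docker/$CID/memory.kmem.limit_in_bytes
cat /sys/fs/cgroup/memory/docker/$CID/memory.kmem.usage_in_bytes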

@maiconbaumx

Hello.
I'm facing this same problem in my environment and it seems like a bug, because it randomly happens in a cluster with more than 350 containers. Is there a chance that this bug is present in these current versions?

# docker --version
Docker version 19.03.5, build 633a0ea

# docker version
Client: Docker Engine - Community
 Version:           19.03.5
 API version:       1.40
 Go version:        go1.12.12
 Git commit:        633a0ea
 Built:             Wed Nov 13 07:25:41 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.5
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.12
  Git commit:       633a0ea
  Built:            Wed Nov 13 07:24:18 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.10
  GitCommit:        b34a5c8af56e510852c35414db4c1f4fa6172339
 runc:
  Version:          1.0.0-rc8+dev
  GitCommit:        3e425f80a8c931f88e6d94a8c831b9d5aa481657
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

#  containerd --version
containerd  1.2.10 b34a5c8af56e510852c35414db4c1f4fa6172339

#  uname -r
3.10.0-1062.4.3.el7.x86_64

@guruprakashs

guruprakashs commented Dec 4, 2019

@thaJeztah

We are also seeing this issue in our cluster.

# docker run -it c7c39515eefe bash
docker: Error response from daemon: OCI runtime create failed: container_linux.go:344: starting container process caused "process_linux.go:275: applying cgroup configuration for process caused \"mkdir /sys/fs/cgroup/memory/docker/56ca1a748e94176c378682012a8ad1a6cab3b812dfb1f34e9da303d47d8f0e97: cannot allocate memory\"": unknown.

These are the software versions that we are on. Could you please advise?

# docker info
Containers: 29
 Running: 19
 Paused: 0
 Stopped: 10
Images: 184
Server Version: 18.09.3
Storage Driver: overlay2
 Backing Filesystem: xfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: e6b3f5632f50dbc4e9cb6288d911bf4f5e95b18e
runc version: 6635b4f0c6af3810594d2770f662f34ddc15b40d
init version: fec3683
Security Options:
 seccomp
  Profile: default
Kernel Version: 3.10.0-957.1.3.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 16
Total Memory: 503.8GiB
Name: hostname.here
ID: QG35:QFQQ:ZLOZ:BZEC:SKL5:CDJ2:74VV:WFDO:5PCY:MJEN:VMQB:DNA5
Docker Root Dir: /data/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine

# uname -r
3.10.0-957.1.3.el7.x86_64

# containerd --version
containerd github.com/containerd/containerd 1.2.4 e6b3f5632f50dbc4e9cb6288d911bf4f5e95b18e

Thanks

@ntk148v

ntk148v commented Dec 5, 2019

@thaJeztah I'm facing the exact same issue in my environment.

# uname -a
Linux monitor49 3.10.0-957.5.1.el7.x86_64 #1 SMP Fri Feb 1 14:54:57 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

# docker info
Containers: 14
 Running: 13
 Paused: 0
 Stopped: 1
Images: 54
Server Version: 18.06.0-ce
Storage Driver: overlay2
 Backing Filesystem: xfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: d64c661f1d51c48782c9cec8fda7604785f93587
runc version: 69663f0bd4b60df09991c08812a60108003fa340
init version: fec3683
Security Options:
 seccomp
  Profile: default
Kernel Version: 3.10.0-957.5.1.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 32
Total Memory: 125.7GiB
Name: monitor49
ID: 5T2R:BZFE:TQD3:LXSE:GUC7:5WNG:O5WY:CLJ2:FT62:J7ZX:EYB2:H67D
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 nexus.5f.cloud:8890
 nexus.5f.cloud:8891
 nexus.cloud:8890
 nexus.cloud:8891
 127.0.0.0/8
Live Restore Enabled: true

# docker-containerd --version
containerd github.com/containerd/containerd v1.1.1 d64c661f1d51c48782c9cec8fda7604785f93587

@jpmenil

jpmenil commented Dec 5, 2019

Same here, Red Hat 7.7,
kernel 3.10.0-1062.4.1.el7.x86_64 with Docker version 19.03.5, build 633a0ea.
@thaJeztah can you reopen the issue?

@thaJeztah thaJeztah reopened this Dec 5, 2019
@jpmenil

jpmenil commented Dec 5, 2019

This is a continuation of this kernel bug, at least on RHEL:
https://bugzilla.redhat.com/show_bug.cgi?id=1507149

@petersbattaglia

repros on CentOS 7
kernel Linux 3.10.0-1062.4.3.el7.x86_64
Docker version 19.03.5, build 633a0ea

@cccdemon

cccdemon commented Dec 12, 2019

Same issue here

CentOS 7
Kernel: Linux linux.hostname.placeholder.it 3.10.0-1062.4.3.el7.x86_64 #1 SMP Wed Nov 13 23:58:53 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

Docker version 19.03.5, build 633a0ea

Provisioned via Nomad

Log:

Dec 12 12:00:46 linux.hostname.placeholder.it dockerd[1869]: time="2019-12-12T12:00:46.033619039+01:00" level=error msg="9c9e6096b6b2855934d9a1a06250969d44466145f9a392f86b0515f34630288b cleanup: failed to delete container from containerd: no such container"
Dec 12 12:00:46 linux.hostname.placeholder.it dockerd[1869]: time="2019-12-12T12:00:46.033708452+01:00" level=error msg="Handler for POST /containers/9c9e6096b6b2855934d9a1a06250969d44466145f9a392f86b0515f34630288b/start returned error: OCI runtime create failed: container_linux.go:346: starting container process caused \"process_linux.go:297: applying cgroup configuration for process caused \\\"mkdir /sys/fs/cgroup/memory/docker/9c9e6096b6b2855934d9a1a06250969d44466145f9a392f86b0515f34630288b: cannot allocate memory\\\"\": unknown"
Dec 12 12:00:46 linux.hostname.placeholder.it kernel: docker0: port 6(veth810fe6d) entered blocking state
Dec 12 12:00:46 linux.hostname.placeholder.it kernel: docker0: port 6(veth810fe6d) entered disabled state
Dec 12 12:00:46 linux.hostname.placeholder.it kernel: device veth810fe6d entered promiscuous mode
Dec 12 12:00:46 linux.hostname.placeholder.it kernel: IPv6: ADDRCONF(NETDEV_UP): veth810fe6d: link is not ready
Dec 12 12:00:46 linux.hostname.placeholder.it kernel: docker0: port 6(veth810fe6d) entered blocking state
Dec 12 12:00:46 linux.hostname.placeholder.it kernel: docker0: port 6(veth810fe6d) entered forwarding state
Dec 12 12:00:46 linux.hostname.placeholder.it kernel: docker0: port 7(vethf942213) entered blocking state
Dec 12 12:00:46 linux.hostname.placeholder.it kernel: docker0: port 7(vethf942213) entered disabled state
Dec 12 12:00:46 linux.hostname.placeholder.it kernel: device vethf942213 entered promiscuous mode
Dec 12 12:00:46 linux.hostname.placeholder.it kernel: IPv6: ADDRCONF(NETDEV_UP): vethf942213: link is not ready
Dec 12 12:00:46 linux.hostname.placeholder.it kernel: docker0: port 7(vethf942213) entered blocking state
Dec 12 12:00:46 linux.hostname.placeholder.it kernel: docker0: port 7(vethf942213) entered forwarding state
Dec 12 12:00:46 linux.hostname.placeholder.it kernel: docker0: port 5(vethd70c60e) entered disabled state
Dec 12 12:00:46 linux.hostname.placeholder.it kernel: docker0: port 6(veth810fe6d) entered disabled state
Dec 12 12:00:46 linux.hostname.placeholder.it kernel: docker0: port 7(vethf942213) entered disabled state
Dec 12 12:00:46 linux.hostname.placeholder.it containerd[1753]: time="2019-12-12T12:00:46.164338118+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/114d4d0d12a56762e6a5b3b3ba5c9490285203f264e1b855c999eead5b9e891b/shim.sock" debug=false pid=106646
Dec 12 12:00:46 linux.hostname.placeholder.it containerd[1753]: time="2019-12-12T12:00:46.165050163+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/7ab9f53ec0d561800e6b5b61e98f6be75777f154966a498eb4947d5a73723914/shim.sock" debug=false pid=106647
Dec 12 12:00:46 linux.hostname.placeholder.it containerd[1753]: time="2019-12-12T12:00:46.170620429+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b27ad5a77e4469e1025d4311cf4a735e630c33907209cf31f472e8f909c7caf1/shim.sock" debug=false pid=106666
Dec 12 12:00:46 linux.hostname.placeholder.it containerd[1753]: time="2019-12-12T12:00:46.267713777+01:00" level=info msg="shim reaped" id=b27ad5a77e4469e1025d4311cf4a735e630c33907209cf31f472e8f909c7caf1
Dec 12 12:00:46 linux.hostname.placeholder.it containerd[1753]: time="2019-12-12T12:00:46.275364215+01:00" level=info msg="shim reaped" id=114d4d0d12a56762e6a5b3b3ba5c9490285203f264e1b855c999eead5b9e891b
Dec 12 12:00:46 linux.hostname.placeholder.it dockerd[1869]: time="2019-12-12T12:00:46.277650799+01:00" level=error msg="stream copy error: reading from a closed fifo"
Dec 12 12:00:46 linux.hostname.placeholder.it dockerd[1869]: time="2019-12-12T12:00:46.277696613+01:00" level=error msg="stream copy error: reading from a closed fifo"
Dec 12 12:00:46 linux.hostname.placeholder.it dockerd[1869]: time="2019-12-12T12:00:46.285452523+01:00" level=error msg="stream copy error: reading from a closed fifo"
Dec 12 12:00:46 linux.hostname.placeholder.it dockerd[1869]: time="2019-12-12T12:00:46.285484175+01:00" level=error msg="stream copy error: reading from a closed fifo"
Dec 12 12:00:46 linux.hostname.placeholder.it containerd[1753]: time="2019-12-12T12:00:46.287996609+01:00" level=info msg="shim reaped" id=7ab9f53ec0d561800e6b5b61e98f6be75777f154966a498eb4947d5a73723914
Dec 12 12:00:46 linux.hostname.placeholder.it dockerd[1869]: time="2019-12-12T12:00:46.297959225+01:00" level=error msg="stream copy error: reading from a closed fifo"
Dec 12 12:00:46 linux.hostname.placeholder.it dockerd[1869]: time="2019-12-12T12:00:46.297968748+01:00" level=error msg="stream copy error: reading from a closed fifo"
Dec 12 12:00:46 linux.hostname.placeholder.it kernel: docker0: port 7(vethf942213) entered disabled state
Dec 12 12:00:46 linux.hostname.placeholder.it kernel: device vethf942213 left promiscuous mode
Dec 12 12:00:46 linux.hostname.placeholder.it kernel: docker0: port 7(vethf942213) entered disabled state
Dec 12 12:00:46 linux.hostname.placeholder.it kernel: docker0: port 5(vethd70c60e) entered disabled state
Dec 12 12:00:46 linux.hostname.placeholder.it kernel: device vethd70c60e left promiscuous mode
Dec 12 12:00:46 linux.hostname.placeholder.it kernel: docker0: port 5(vethd70c60e) entered disabled state
Dec 12 12:00:46 linux.hostname.placeholder.it kernel: docker0: port 6(veth810fe6d) entered disabled state
Dec 12 12:00:46 linux.hostname.placeholder.it dockerd[1869]: time="2019-12-12T12:00:46.465478486+01:00" level=warning msg="b27ad5a77e4469e1025d4311cf4a735e630c33907209cf31f472e8f909c7caf1 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/b27ad5a77e4469e1025d4311cf4a735e630c33907209cf31f472e8f909c7caf1/mounts/shm, flags: 0x2: no such file or directory"
Dec 12 12:00:46 linux.hostname.placeholder.it kernel: device veth810fe6d left promiscuous mode
Dec 12 12:00:46 linux.hostname.placeholder.it kernel: docker0: port 6(veth810fe6d) entered disabled state
Dec 12 12:00:46 linux.hostname.placeholder.it dockerd[1869]: time="2019-12-12T12:00:46.473303028+01:00" level=warning msg="114d4d0d12a56762e6a5b3b3ba5c9490285203f264e1b855c999eead5b9e891b cleanup: failed to unmount IPC: umount /var/lib/docker/containers/114d4d0d12a56762e6a5b3b3ba5c9490285203f264e1b855c999eead5b9e891b/mounts/shm, flags: 0x2: no such file or directory"
Dec 12 12:00:46 linux.hostname.placeholder.it dockerd[1869]: time="2019-12-12T12:00:46.521090337+01:00" level=warning msg="7ab9f53ec0d561800e6b5b61e98f6be75777f154966a498eb4947d5a73723914 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/7ab9f53ec0d561800e6b5b61e98f6be75777f154966a498eb4947d5a73723914/mounts/shm, flags: 0x2: no such file or directory"
Dec 12 12:00:46 linux.hostname.placeholder.it dockerd[1869]: time="2019-12-12T12:00:46.578620238+01:00" level=error msg="114d4d0d12a56762e6a5b3b3ba5c9490285203f264e1b855c999eead5b9e891b cleanup: failed to delete container from containerd: no such container"
Dec 12 12:00:46 linux.hostname.placeholder.it dockerd[1869]: time="2019-12-12T12:00:46.578710816+01:00" level=error msg="Handler for POST /containers/114d4d0d12a56762e6a5b3b3ba5c9490285203f264e1b855c999eead5b9e891b/start returned error: OCI runtime create failed: container_linux.go:346: starting container process caused \"process_linux.go:297: applying cgroup configuration for process caused \\\"mkdir /sys/fs/cgroup/memory/docker/114d4d0d12a56762e6a5b3b3ba5c9490285203f264e1b855c999eead5b9e891b: cannot allocate memory\\\"\": unknown"
Dec 12 12:00:46 linux.hostname.placeholder.it dockerd[1869]: time="2019-12-12T12:00:46.581544749+01:00" level=error msg="b27ad5a77e4469e1025d4311cf4a735e630c33907209cf31f472e8f909c7caf1 cleanup: failed to delete container from containerd: no such container"
Dec 12 12:00:46 linux.hostname.placeholder.it dockerd[1869]: time="2019-12-12T12:00:46.581584376+01:00" level=error msg="Handler for POST /containers/b27ad5a77e4469e1025d4311cf4a735e630c33907209cf31f472e8f909c7caf1/start returned error: OCI runtime create failed: container_linux.go:346: starting container process caused \"process_linux.go:297: applying cgroup configuration for process caused \\\"mkdir /sys/fs/cgroup/memory/docker/b27ad5a77e4469e1025d4311cf4a735e630c33907209cf31f472e8f909c7caf1: cannot allocate memory\\\"\": unknown"
Dec 12 12:00:46 linux.hostname.placeholder.it dockerd[1869]: time="2019-12-12T12:00:46.610861406+01:00" level=error msg="7ab9f53ec0d561800e6b5b61e98f6be75777f154966a498eb4947d5a73723914 cleanup: failed to delete container from containerd: no such container"
Dec 12 12:00:46 linux.hostname.placeholder.it dockerd[1869]: time="2019-12-12T12:00:46.610913300+01:00" level=error msg="Handler for POST /containers/7ab9f53ec0d561800e6b5b61e98f6be75777f154966a498eb4947d5a73723914/start returned error: OCI runtime create failed: container_linux.go:346: starting container process caused \"process_linux.go:297: applying cgroup configuration for process caused \\\"mkdir /sys/fs/cgroup/memory/docker/7ab9f53ec0d561800e6b5b61e98f6be75777f154966a498eb4947d5a73723914: cannot allocate memory\\\"\": unknown"
Dec 12 12:00:46 linux.hostname.placeholder.it kernel: docker0: port 5(veth83d5462) entered blocking state
Dec 12 12:00:46 linux.hostname.placeholder.it kernel: docker0: port 5(veth83d5462) entered disabled state
Dec 12 12:00:46 linux.hostname.placeholder.it kernel: device veth83d5462 entered promiscuous mode
Dec 12 12:00:46 linux.hostname.placeholder.it kernel: IPv6: ADDRCONF(NETDEV_UP): veth83d5462: link is not ready
Dec 12 12:00:46 linux.hostname.placeholder.it kernel: docker0: port 5(veth83d5462) entered blocking state
Dec 12 12:00:46 linux.hostname.placeholder.it kernel: docker0: port 5(veth83d5462) entered forwarding state
Dec 12 12:00:46 linux.hostname.placeholder.it containerd[1753]: time="2019-12-12T12:00:46.767810035+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/09e1d8749a5d3abd187233dcf6555dbb13e3512d26e9ad53088e1c8c3cc33c22/shim.sock" debug=false pid=106740
Dec 12 12:00:46 linux.hostname.placeholder.it containerd[1753]: time="2019-12-12T12:00:46.897232357+01:00" level=info msg="shim reaped" id=09e1d8749a5d3abd187233dcf6555dbb13e3512d26e9ad53088e1c8c3cc33c22
Dec 12 12:00:46 linux.hostname.placeholder.it dockerd[1869]: time="2019-12-12T12:00:46.908706574+01:00" level=error msg="stream copy error: reading from a closed fifo"
Dec 12 12:00:46 linux.hostname.placeholder.it dockerd[1869]: time="2019-12-12T12:00:46.908878386+01:00" level=error msg="stream copy error: reading from a closed fifo"
Dec 12 12:00:46 linux.hostname.placeholder.it kernel: docker0: port 5(veth83d5462) entered disabled state
Dec 12 12:00:46 linux.hostname.placeholder.it kernel: device veth83d5462 left promiscuous mode
Dec 12 12:00:46 linux.hostname.placeholder.it kernel: docker0: port 5(veth83d5462) entered disabled state
Dec 12 12:00:46 linux.hostname.placeholder.it dockerd[1869]: time="2019-12-12T12:00:46.976899282+01:00" level=warning msg="09e1d8749a5d3abd187233dcf6555dbb13e3512d26e9ad53088e1c8c3cc33c22 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/09e1d8749a5d3abd187233dcf6555dbb13e3512d26e9ad53088e1c8c3cc33c22/mounts/shm, flags: 0x2: no such file or directory"
Dec 12 12:00:47 linux.hostname.placeholder.it dockerd[1869]: time="2019-12-12T12:00:47.058601763+01:00" level=error msg="09e1d8749a5d3abd187233dcf6555dbb13e3512d26e9ad53088e1c8c3cc33c22 cleanup: failed to delete container from containerd: no such container"
Dec 12 12:00:47 linux.hostname.placeholder.it nomad[1733]: 2019-12-12T12:00:47.058+0100 [ERROR] client.driver_mgr.docker: failed to start container: driver=docker container_id=09e1d8749a5d3abd187233dcf6555dbb13e3512d26e9ad53088e1c8c3cc33c22 error="API error (500): OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:297: applying cgroup configuration for process caused \"mkdir /sys/fs/cgroup/memory/docker/09e1d8749a5d3abd187233dcf6555dbb13e3512d26e9ad53088e1c8c3cc33c22: cannot allocate memory\"": unknown"
Dec 12 12:00:47 linux.hostname.placeholder.it dockerd[1869]: time="2019-12-12T12:00:47.058699552+01:00" level=error msg="Handler for POST /containers/09e1d8749a5d3abd187233dcf6555dbb13e3512d26e9ad53088e1c8c3cc33c22/start returned error: OCI runtime create failed: container_linux.go:346: starting container process caused \"process_linux.go:297: applying cgroup configuration for process caused \\\"mkdir /sys/fs/cgroup/memory/docker/09e1d8749a5d3abd187233dcf6555dbb13e3512d26e9ad53088e1c8c3cc33c22: cannot allocate memory\\\"\": unknown"
Dec 12 12:00:47 linux.hostname.placeholder.it nomad[1733]: 2019-12-12T12:00:47.179+0100 [ERROR] client.alloc_runner.task_runner: running driver failed: alloc_id=22c7c014-c45f-a3ec-1b72-e441f5efb57e task=core-drones-event-handler error="Failed to start container 09e1d8749a5d3abd187233dcf6555dbb13e3512d26e9ad53088e1c8c3cc33c22: API error (500): OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:297: applying cgroup configuration for process caused \"mkdir /sys/fs/cgroup/memory/docker/09e1d8749a5d3abd187233dcf6555dbb13e3512d26e9ad53088e1c8c3cc33c22: cannot allocate memory\"": unknown"
Dec 12 12:00:47 linux.hostname.placeholder.it nomad[1733]: 2019-12-12T12:00:47.179+0100 [INFO ] client.alloc_runner.task_runner: restarting task: alloc_id=22c7c014-c45f-a3ec-1b72-e441f5efb57e task=core-drones-event-handler reason="Restart within policy" delay=17.065586128s
Dec 12 12:00:47 linux.hostname.placeholder.it kernel: docker0: port 5(veth90db994) entered blocking state
Dec 12 12:00:47 linux.hostname.placeholder.it kernel: docker0: port 5(veth90db994) entered disabled state
Dec 12 12:00:47 linux.hostname.placeholder.it kernel: device veth90db994 entered promiscuous mode
Dec 12 12:00:47 linux.hostname.placeholder.it kernel: IPv6: ADDRCONF(NETDEV_UP): veth90db994: link is not ready
Dec 12 12:00:47 linux.hostname.placeholder.it kernel: docker0: port 5(veth90db994) entered blocking state
Dec 12 12:00:47 linux.hostname.placeholder.it kernel: docker0: port 5(veth90db994) entered forwarding state
Dec 12 12:00:47 linux.hostname.placeholder.it containerd[1753]: time="2019-12-12T12:00:47.229192684+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/adb62738eb9315a239aac02d981ca0d5afbb7d66d99a977b6d9db134036df94d/shim.sock" debug=false pid=106774
Dec 12 12:00:47 linux.hostname.placeholder.it containerd[1753]: time="2019-12-12T12:00:47.348654188+01:00" level=info msg="shim reaped" id=adb62738eb9315a239aac02d981ca0d5afbb7d66d99a977b6d9db134036df94d
Dec 12 12:00:47 linux.hostname.placeholder.it dockerd[1869]: time="2019-12-12T12:00:47.358609610+01:00" level=error msg="stream copy error: reading from a closed fifo"
Dec 12 12:00:47 linux.hostname.placeholder.it dockerd[1869]: time="2019-12-12T12:00:47.358609645+01:00" level=error msg="stream copy error: reading from a closed fifo"
Dec 12 12:00:47 linux.hostname.placeholder.it kernel: docker0: port 5(veth90db994) entered disabled state
Dec 12 12:00:47 linux.hostname.placeholder.it kernel: device veth90db994 left promiscuous mode
Dec 12 12:00:47 linux.hostname.placeholder.it kernel: docker0: port 5(veth90db994) entered disabled state
Dec 12 12:00:47 linux.hostname.placeholder.it nomad[1733]: 2019-12-12T12:00:47.458+0100 [INFO ] client.driver_mgr.docker: created container: driver=docker container_id=014a76bce64d20765a5bf2dc5b32fdb990e53a80a2fe3ea26343d88a62d41321
Dec 12 12:00:47 linux.hostname.placeholder.it dockerd[1869]: time="2019-12-12T12:00:47.461382343+01:00" level=warning msg="adb62738eb9315a239aac02d981ca0d5afbb7d66d99a977b6d9db134036df94d cleanup: failed to unmount IPC: umount /var/lib/docker/containers/adb62738eb9315a239aac02d981ca0d5afbb7d66d99a977b6d9db134036df94d/mounts/shm, flags: 0x2: no such file or directory"
Dec 12 12:00:47 linux.hostname.placeholder.it kernel: docker0: port 5(vethc153a0c) entered blocking state
Dec 12 12:00:47 linux.hostname.placeholder.it kernel: docker0: port 5(vethc153a0c) entered disabled state
Dec 12 12:00:47 linux.hostname.placeholder.it kernel: device vethc153a0c entered promiscuous mode
Dec 12 12:00:47 linux.hostname.placeholder.it kernel: IPv6: ADDRCONF(NETDEV_UP): vethc153a0c: link is not ready
Dec 12 12:00:47 linux.hostname.placeholder.it kernel: docker0: port 5(vethc153a0c) entered blocking state
Dec 12 12:00:47 linux.hostname.placeholder.it kernel: docker0: port 5(vethc153a0c) entered forwarding state
Dec 12 12:00:47 linux.hostname.placeholder.it dockerd[1869]: time="2019-12-12T12:00:47.542195860+01:00" level=error msg="adb62738eb9315a239aac02d981ca0d5afbb7d66d99a977b6d9db134036df94d cleanup: failed to delete container from containerd: no such container"
Dec 12 12:00:47 linux.hostname.placeholder.it nomad[1733]: 2019-12-12T12:00:47.542+0100 [ERROR] client.driver_mgr.docker: failed to start container: driver=docker container_id=adb62738eb9315a239aac02d981ca0d5afbb7d66d99a977b6d9db134036df94d error="API error (500): OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:297: applying cgroup configuration for process caused \"mkdir /sys/fs/cgroup/memory/docker/adb62738eb9315a239aac02d981ca0d5afbb7d66d99a977b6d9db134036df94d: cannot allocate memory\"": unknown"
Dec 12 12:00:47 linux.hostname.placeholder.it dockerd[1869]: time="2019-12-12T12:00:47.542233359+01:00" level=error msg="Handler for POST /containers/adb62738eb9315a239aac02d981ca0d5afbb7d66d99a977b6d9db134036df94d/start returned error: OCI runtime create failed: container_linux.go:346: starting container process caused \"process_linux.go:297: applying cgroup configuration for process caused \\\"mkdir /sys/fs/cgroup/memory/docker/adb62738eb9315a239aac02d981ca0d5afbb7d66d99a977b6d9db134036df94d: cannot allocate memory\\\"\": unknown"
Dec 12 12:00:47 linux.hostname.placeholder.it containerd[1753]: time="2019-12-12T12:00:47.551852060+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/014a76bce64d20765a5bf2dc5b32fdb990e53a80a2fe3ea26343d88a62d41321/shim.sock" debug=false pid=106820
Dec 12 12:00:47 linux.hostname.placeholder.it nomad[1733]: 2019-12-12T12:00:47.658+0100 [ERROR] client.alloc_runner.task_runner: running driver failed: alloc_id=0f03d341-2db7-ef1f-ac3d-b46729121047 task=core-drones-sensor error="Failed to start container adb62738eb9315a239aac02d981ca0d5afbb7d66d99a977b6d9db134036df94d: API error (500): OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:297: applying cgroup configuration for process caused \"mkdir /sys/fs/cgroup/memory/docker/adb62738eb9315a239aac02d981ca0d5afbb7d66d99a977b6d9db134036df94d: cannot allocate memory\"": unknown"
Dec 12 12:00:47 linux.hostname.placeholder.it nomad[1733]: 2019-12-12T12:00:47.658+0100 [INFO ] client.alloc_runner.task_runner: restarting task: alloc_id=0f03d341-2db7-ef1f-ac3d-b46729121047 task=core-drones-sensor reason="Restart within policy" delay=15.333442815s
Dec 12 12:00:47 linux.hostname.placeholder.it containerd[1753]: time="2019-12-12T12:00:47.685596667+01:00" level=info msg="shim reaped" id=014a76bce64d20765a5bf2dc5b32fdb990e53a80a2fe3ea26343d88a62d41321
Dec 12 12:00:47 linux.hostname.placeholder.it dockerd[1869]: time="2019-12-12T12:00:47.695890735+01:00" level=error msg="stream copy error: reading from a closed fifo"
Dec 12 12:00:47 linux.hostname.placeholder.it dockerd[1869]: time="2019-12-12T12:00:47.695939782+01:00" level=error msg="stream copy error: reading from a closed fifo"
Dec 12 12:00:47 linux.hostname.placeholder.it dockerd[1869]: time="2019-12-12T12:00:47.757933654+01:00" level=warning msg="Error getting v2 registry: Get https://registry:5000/v2/: http: server gave HTTP response to HTTPS client"
Dec 12 12:00:47 linux.hostname.placeholder.it dockerd[1869]: time="2019-12-12T12:00:47.758010520+01:00" level=info msg="Attempting next endpoint for pull after error: Get https://registry:5000/v2/: http: server gave HTTP response to HTTPS client"
Dec 12 12:00:47 linux.hostname.placeholder.it kernel: docker0: port 5(vethc153a0c) entered disabled state
Dec 12 12:00:47 linux.hostname.placeholder.it kernel: device vethc153a0c left promiscuous mode
Dec 12 12:00:47 linux.hostname.placeholder.it kernel: docker0: port 5(vethc153a0c) entered disabled state
Dec 12 12:00:47 linux.hostname.placeholder.it dockerd[1869]: time="2019-12-12T12:00:47.819093578+01:00" level=warning msg="014a76bce64d20765a5bf2dc5b32fdb990e53a80a2fe3ea26343d88a62d41321 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/014a76bce64d20765a5bf2dc5b32fdb990e53a80a2fe3ea26343d88a62d41321/mounts/shm, flags: 0x2: no such file or directory"
Dec 12 12:00:47 linux.hostname.placeholder.it nomad[1733]: 2019-12-12T12:00:47.907+0100 [INFO ] client.driver_mgr.docker: created container: driver=docker container_id=44c9cef26857d695932d6d66ea218ea2a8c081732b5b3305fea7e540a65c2331
Dec 12 12:00:47 linux.hostname.placeholder.it kernel: docker0: port 5(veth70dc187) entered blocking state
Dec 12 12:00:47 linux.hostname.placeholder.it kernel: docker0: port 5(veth70dc187) entered disabled state
Dec 12 12:00:47 linux.hostname.placeholder.it kernel: device veth70dc187 entered promiscuous mode
Dec 12 12:00:47 linux.hostname.placeholder.it kernel: IPv6: ADDRCONF(NETDEV_UP): veth70dc187: link is not ready
Dec 12 12:00:47 linux.hostname.placeholder.it kernel: docker0: port 5(veth70dc187) entered blocking state
Dec 12 12:00:47 linux.hostname.placeholder.it kernel: docker0: port 5(veth70dc187) entered forwarding state
Dec 12 12:00:47 linux.hostname.placeholder.it dockerd[1869]: time="2019-12-12T12:00:47.939283148+01:00" level=error msg="014a76bce64d20765a5bf2dc5b32fdb990e53a80a2fe3ea26343d88a62d41321 cleanup: failed to delete container from containerd: no such container"
Dec 12 12:00:47 linux.hostname.placeholder.it dockerd[1869]: time="2019-12-12T12:00:47.939366568+01:00" level=error msg="Handler for POST /containers/014a76bce64d20765a5bf2dc5b32fdb990e53a80a2fe3ea26343d88a62d41321/start returned error: OCI runtime create failed: container_linux.go:346: starting container process caused \"process_linux.go:297: applying cgroup configuration for process caused \\\"mkdir /sys/fs/cgroup/memory/docker/014a76bce64d20765a5bf2dc5b32fdb990e53a80a2fe3ea26343d88a62d41321: cannot allocate memory\\\"\": unknown"
Dec 12 12:00:47 linux.hostname.placeholder.it containerd[1753]: time="2019-12-12T12:00:47.993997045+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/44c9cef26857d695932d6d66ea218ea2a8c081732b5b3305fea7e540a65c2331/shim.sock" debug=false pid=106883
Dec 12 12:00:48 linux.hostname.placeholder.it containerd[1753]: time="2019-12-12T12:00:48.095195175+01:00" level=info msg="shim reaped" id=44c9cef26857d695932d6d66ea218ea2a8c081732b5b3305fea7e540a65c2331
Dec 12 12:00:48 linux.hostname.placeholder.it dockerd[1869]: time="2019-12-12T12:00:48.105262650+01:00" level=error msg="stream copy error: reading from a closed fifo"
Dec 12 12:00:48 linux.hostname.placeholder.it dockerd[1869]: time="2019-12-12T12:00:48.105305452+01:00" level=error msg="stream copy error: reading from a closed fifo"
Dec 12 12:00:48 linux.hostname.placeholder.it kernel: docker0: port 5(veth70dc187) entered disabled state
Dec 12 12:00:48 linux.hostname.placeholder.it kernel: docker0: port 6(veth74cd792) entered blocking state
Dec 12 12:00:48 linux.hostname.placeholder.it kernel: docker0: port 6(veth74cd792) entered disabled state
Dec 12 12:00:48 linux.hostname.placeholder.it kernel: device veth74cd792 entered promiscuous mode
Dec 12 12:00:48 linux.hostname.placeholder.it kernel: IPv6: ADDRCONF(NETDEV_UP): veth74cd792: link is not ready
Dec 12 12:00:48 linux.hostname.placeholder.it kernel: docker0: port 6(veth74cd792) entered blocking state
Dec 12 12:00:48 linux.hostname.placeholder.it kernel: docker0: port 6(veth74cd792) entered forwarding state
Dec 12 12:00:48 linux.hostname.placeholder.it kernel: docker0: port 5(veth70dc187) entered disabled state
Dec 12 12:00:48 linux.hostname.placeholder.it kernel: device veth70dc187 left promiscuous mode
Dec 12 12:00:48 linux.hostname.placeholder.it kernel: docker0: port 5(veth70dc187) entered disabled state
Dec 12 12:00:48 linux.hostname.placeholder.it dockerd[1869]: time="2019-12-12T12:00:48.211845631+01:00" level=warning msg="44c9cef26857d695932d6d66ea218ea2a8c081732b5b3305fea7e540a65c2331 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/44c9cef26857d695932d6d66ea218ea2a8c081732b5b3305fea7e540a65c2331/mounts/shm, flags: 0x2: no such file or directory"
Dec 12 12:00:48 linux.hostname.placeholder.it containerd[1753]: time="2019-12-12T12:00:48.247687889+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/014a76bce64d20765a5bf2dc5b32fdb990e53a80a2fe3ea26343d88a62d41321/shim.sock" debug=false pid=106961
Dec 12 12:00:48 linux.hostname.placeholder.it dockerd[1869]: time="2019-12-12T12:00:48.301734493+01:00" level=error msg="44c9cef26857d695932d6d66ea218ea2a8c081732b5b3305fea7e540a65c2331 cleanup: failed to delete container from containerd: no such container"
Dec 12 12:00:48 linux.hostname.placeholder.it dockerd[1869]: time="2019-12-12T12:00:48.301789037+01:00" level=error msg="Handler for POST /containers/44c9cef26857d695932d6d66ea218ea2a8c081732b5b3305fea7e540a65c2331/start returned error: OCI runtime create failed: container_linux.go:346: starting container process caused \"process_linux.go:297: applying cgroup configuration for process caused \\\"mkdir /sys/fs/cgroup/memory/docker/44c9cef26857d695932d6d66ea218ea2a8c081732b5b3305fea7e540a65c2331: cannot allocate memory\\\"\": unknown"

@jpmenil

jpmenil commented Dec 13, 2019

This should be fixed by kernel-3.10.0-1075.el7.

@hrnjan

hrnjan commented Dec 17, 2019

Same issue here:
CentOS Linux release 7.7.1908
Kernel 3.10.0-1062.9.1.el7.x86_64
Docker version 19.03.5, build 633a0ea
130+ containers (pods in k8s)

To resolve this issue we are going to replace the kernel with kernel-lt 4.4.206 from elrepo. We are still using iptables, so first we will need to reconfigure our hosts for nftables usage.
Let us know if you find some kind of workaround for this issue.
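For reference, if the ELRepo repository is already configured, the kernel swap described above is roughly the following sketch (not verified on this cluster):

# Install the long-term kernel from the elrepo-kernel repository
yum --enablerepo=elrepo-kernel install -y kernel-lt
# Make the newly installed kernel the default boot entry, then reboot
grub2-set-default 0
reboot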

@JakeBonek
Author

> Same issue here:
> CentOS Linux release 7.7.1908
> Kernel 3.10.0-1062.9.1.el7.x86_64
> Docker version 19.03.5, build 633a0ea
> 130+ containers (pods in k8s)
>
> To resolve this issue we are going to replace the kernel with kernel-lt 4.4.206 from elrepo. We are still using iptables, so first we will need to reconfigure our hosts for nftables usage.
> Let us know if you find some kind of workaround for this issue.

Just so you know, we've tried with various 4.x kernels as well and had the same issue.

@hrnjan

hrnjan commented Dec 17, 2019

>> Same issue here:
>> CentOS Linux release 7.7.1908
>> Kernel 3.10.0-1062.9.1.el7.x86_64
>> Docker version 19.03.5, build 633a0ea
>> 130+ containers (pods in k8s)
>> To resolve this issue we are going to replace the kernel with kernel-lt 4.4.206 from elrepo. We are still using iptables, so first we will need to reconfigure our hosts for nftables usage.
>> Let us know if you find some kind of workaround for this issue.
>
> Just so you know, we've tried with various 4.x kernels as well and had the same issue.

Can you list affected 4.x kernels please? Thank you!
We need to fix this so finding the 'right' kernel is the only way as I can see.

@jpmenil

jpmenil commented Dec 19, 2019

It took me around a week to trigger the issue before I rebooted the host.
If anyone can trigger this issue faster than I can, it would be worth testing with the following kernel parameter:
'cgroup.memory=nokmem'

@kanthasamyraja

> It took me around a week to trigger the issue before I rebooted the host.
> If anyone can trigger this issue faster than I can, it would be worth testing with the following kernel parameter:
> 'cgroup.memory=nokmem'

I am also facing this issue with the mentioned Docker (19.03.5) and kernel (kernel-3.10.0-1062) versions on RHEL 7.7.

Could you also tell me where I should add this parameter?

@jpmenil

jpmenil commented Jan 6, 2020

@kanthasamyraja edit /etc/default/grub, then update the grub config.
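On a CentOS/RHEL 7 host this boils down to roughly the following sketch (the grub.cfg path shown is for BIOS systems; UEFI systems use /boot/efi/EFI/centos/grub.cfg):

# 1. In /etc/default/grub, append the parameter to the GRUB_CMDLINE_LINUX line, e.g.:
#      GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet cgroup.memory=nokmem"
# 2. Regenerate the grub config and reboot:
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot
# 3. After the reboot, confirm the parameter is active:
grep -o 'cgroup.memory=nokmem' /proc/cmdline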

@TBBle

TBBle commented Jan 13, 2020

@kanthasamyraja: Note that the fix for https://bugzilla.redhat.com/show_bug.cgi?id=1507149 is not in kernel-3.10.0-1062, it's in kernel-3.10.0-1062.4.1 or later. If you're on CentOS 7, the required kernel is in the CentOS Updates repository, not the CentOS Base repository, which should be enabled by default.

Per https://bugzilla.redhat.com/show_bug.cgi?id=1507149#c131 there is possibly a different bug that affects later kernels as well, which is what this ticket was reopened for by @jpmenil .

So if your kernel version was accurate, you should first upgrade to kernel-3.10.0-1062.4.1 to rule out https://bugzilla.redhat.com/show_bug.cgi?id=1507149.

Or you can distinguish them: when the newer issue hits,

> meminfo data doesn't suggest a bloated slab usage, but a bloated page-cache usage instead.
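A quick way to compare those two signatures is to look at the relevant counters in /proc/meminfo:

# Bloated Slab/SUnreclaim points at the older kmem-leak bug;
# bloated Cached (page cache) points at the newer issue described in c131.
grep -E '^(MemFree|Cached|Slab|SReclaimable|SUnreclaim):' /proc/meminfo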

@kanthasamyraja

> @kanthasamyraja: Note that the fix for https://bugzilla.redhat.com/show_bug.cgi?id=1507149 is not in kernel-3.10.0-1062, it's in kernel-3.10.0-1062.4.1 or later. If you're on CentOS 7, the required kernel is in the CentOS Updates repository, not the CentOS Base repository, which should be enabled by default.
>
> Per https://bugzilla.redhat.com/show_bug.cgi?id=1507149#c131 there is possibly a different bug that affects later kernels as well, which is what this ticket was reopened for by @jpmenil.
>
> So if your kernel version was accurate, you should first upgrade to kernel-3.10.0-1062.4.1 to rule out https://bugzilla.redhat.com/show_bug.cgi?id=1507149.
>
> Or you can distinguish them: when the newer issue hits, meminfo data doesn't suggest a bloated slab usage, but a bloated page-cache usage instead.

It is working now for me. I am using the versions below (RHEL 7.7).

$ sudo rpm -qa | grep kernel-3.10.0-1062
kernel-3.10.0-1062.9.1.el7.x86_64
kernel-3.10.0-1062.4.3.el7.x86_64
kernel-3.10.0-1062.7.1.el7.x86_64
$

Thanks for the information.

@jpmenil

jpmenil commented Jan 22, 2020

@thaJeztah, I think we can close this one (again), since adding the cgroup.memory=nokmem kernel parameter does the trick.

@bamb00

bamb00 commented Feb 5, 2020

@jpmenil I'm running RHEL 7.6, kernel 3.10.0-957.1.3.el7.x86_64, and just want to be sure about how to apply the fix.

1 - Set the kernel parameter (cgroup.memory=nokmem) in /etc/default/grub
2 - Upgrade to kernel-3.10.0-1062.4.1.el7.x86_64 or higher
3 - I'm running docker version 18.06.1-ce. Do I need to upgrade docker?

Any additional steps not listed above?

Thanks in advance.

@cofyc

cofyc commented Feb 6, 2020

Hi, if too many memory cgroups have been leaked, new memory cgroups cannot be created and creation fails with "Cannot allocate memory".
You can check whether there are empty cgroups under /sys/fs/cgroup/memory.
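A rough sketch for counting how many memory cgroups the kernel is tracking and how many of them are currently empty:

# Total memory cgroups known to the kernel (the num_cgroups column)
grep memory /proc/cgroups
# Memory cgroup directories that currently contain no processes;
# a very large count here points at leaked cgroups.
find /sys/fs/cgroup/memory -mindepth 1 -type d | while read -r d; do
    [ -z "$(cat "$d/cgroup.procs" 2>/dev/null)" ] && echo "$d"
done | wc -l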

@jpmenil

jpmenil commented Feb 6, 2020

@bamb00 only the kernel parameter is needed; no need to upgrade Docker.

@cofyc

cofyc commented Feb 7, 2020

@jpmenil Thanks! Verified that it works when cgroup.memory=nokmem is configured.

In https://bugzilla.redhat.com/show_bug.cgi?id=1507149, they mentioned that the issue has been fixed in kernel-3.10.0-1075.el7. Did anyone verify it?

@mayconritzmann

Hello, today I had the same problem in my production environment.

My kernel was kernel-3.10.0-1062.9.1; after upgrading to kernel-3.10.0-1062.12.1, all containers started.

Does anyone have any other alternative? The problematic node is part of a k8s cluster.

@stemid

stemid commented Nov 9, 2020

I had this issue out of the blue on an otherwise idle k8s v18 cluster, with a pretty recent CentOS 7 kernel, did an upgrade to the latest packages, added cgroup.memory=nokmem to boot params with grubby and haven't seen the issue since the reboot.

The upgrade was docker-ce 19.03.12-3 => 19.03.13-3 and kernel 3.10.0-1127.13.1 => 3.10.0-1127.19.1.
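For what it's worth, the grubby form of that boot-parameter change is a one-liner (run as root, then reboot):

# Add the parameter to every installed kernel's boot entry
grubby --update-kernel=ALL --args="cgroup.memory=nokmem"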

@llhuii

llhuii commented Nov 12, 2020

I had this issue with this kernel version:

[root@master debug]# uname -a
Linux master 3.10.0-1127.13.1.el7.x86_64 #1 SMP Tue Jun 23 15:46:38 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
[root@master debug]# lsb_release -a
LSB Version:    :core-4.1-amd64:core-4.1-noarch
Distributor ID: CentOS
Description:    CentOS Linux release 7.8.2003 (Core)
Release:        7.8.2003
Codename:       Core

docker server version 19.03.12

@fusionx86

Are you all adding the cgroup.memory kernel parameter to master nodes as well? Seems to only apply to nodes where deployments are scheduled, but for consistency, I'm wondering about the master nodes as well.

@GaboFDC

GaboFDC commented Dec 1, 2020

On all Red Hat-related distributions, it may also be related to the enablement of cgroups v2.
see https://www.redhat.com/sysadmin/fedora-31-control-group-v2
and https://www.linuxuprising.com/2019/11/how-to-install-and-use-docker-on-fedora.html

@BrianSidebotham

BrianSidebotham commented Jan 5, 2021

I'm here with this error, and it's because Fedora >= 31 has moved to cgroups v2. Using podman with the podman-docker interface works OK, except of course the containers also need to support cgroups v2, and CentOS 7 does not. :(
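To check which cgroup hierarchy a host is actually running, and to switch a Fedora 31+ host back to the legacy v1 hierarchy if needed, something like the following sketch works:

# "cgroup2fs" means the unified (v2) hierarchy is mounted; "tmpfs" means legacy v1
stat -fc %T /sys/fs/cgroup/
# On Fedora 31+, boot back into the legacy v1 hierarchy (reboot afterwards)
grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"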

@b-rohit

b-rohit commented Jan 13, 2021

I have the same issue on Ubuntu 18.04

  Operating System: Ubuntu 18.04.5 LTS
            Kernel: Linux 4.15.0
      Architecture: x86-64
Client: Docker Engine - Community
 Version:           20.10.2
 API version:       1.40
 Go version:        go1.13.15
 Git commit:        2291f61
 Built:             Mon Dec 28 16:17:32 2020
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          19.03.11
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.10
  Git commit:       42e35e61f3
  Built:            Mon Jun  1 09:10:54 2020

@dignajar

I'm facing the same issue, but I'm not sure whether the issue comes from the memory cgroup controller.

I tried creating cgroups myself and deleting them, and that works fine, but I still have the issue.
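That manual test looks roughly like this (run as root; cgroup-test is an arbitrary name). When enough cgroups have been leaked, the mkdir itself fails with "cannot allocate memory" even though the host has plenty of free RAM:

mkdir /sys/fs/cgroup/memory/cgroup-test       # fails with ENOMEM when the bug hits
cat /sys/fs/cgroup/memory/cgroup-test/memory.limit_in_bytes
rmdir /sys/fs/cgroup/memory/cgroup-test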

Logs from the Kubernetes node

Jan 19 13:15:43 xxxxxx kubelet[9279]: E0119 13:15:43.049088    9279 pod_workers.go:191] Error syncing pod e886905b-acf0-47df-8c5d-b20b07e7a824 ("xxxxxx(e886905b-acf0-47df-8c5d-b20b07e7a824)"), skipping: failed to ensure that the pod: e886905b-acf0-47df-8c5d-b20b07e7a824 cgroups exist and are correctly applied: failed to create container for [kubepods burstable pode886905b-acf0-47df-8c5d-b20b07e7a824] : mkdir /sys/fs/cgroup/memory/kubepods/burstable/pode886905b-acf0-47df-8c5d-b20b07e7a824: cannot allocate memory

Kernel

Centos 7 - 3.10.0-1127.19.1.el7.x86_64

Could disabling kernel memory accounting with the kernel parameter cgroup.memory=nokmem produce some overflow?

@bcookatpcsd

Fedora 33 Server here, brand new install tonight. I added the kernel parameter with the Fedora-supplied Docker and could not get hello-world to work. Following https://docs.docker.com/engine/install/fedora/ removes the Fedora-supplied Docker and replaces it. I rebooted and removed the kernel parameter; the Docker images needed to be removed because of overlay, but after removing them, "things seem ok so far" (tm).

@tandrez

tandrez commented Feb 11, 2021

Hi

I have the same problem with a kernel version that is later than the one that is supposed to fix this bug (kernel-3.10.0-1075.el7).

Kubernetes log:

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "pod-1613044800-69668": Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:297: applying cgroup configuration for process caused \"mkdir /sys/fs/cgroup/memory/kubepods/besteffort/pod3fc045c9-efae-4bb5-a7ab-2a6fd666dc8c/58321a27c823ea937069c7b13a6998bbc191f41c4e1c9177214d9709364fec3c: cannot allocate memory\"": unknown
# cat /etc/redhat-release
CentOS Linux release 7.8.2003 (Core)

# uname -r
3.10.0-1127.8.2.el7.x86_64

# docker version
Client: Docker Engine - Community
 Version:           19.03.9
 API version:       1.40
 Go version:        go1.13.10
 Git commit:        9d988398e7
 Built:             Fri May 15 00:25:27 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.8
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.17
  Git commit:       afacb8b
  Built:            Wed Mar 11 01:25:42 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.13
  GitCommit:        7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

# kubectl version
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.6", GitCommit:"d32e40e20d167e103faf894261614c5b45c44198", GitTreeState:"clean", BuildDate:"2020-05-20T13:08:34Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

As far as I understand, the workaround to disable cgroup kernel memory accounting is not safe. Am I right here?

@real-felix

> Fedora 33 Server here, brand new install tonight. I added the kernel parameter with the Fedora-supplied Docker and could not get hello-world to work. Following https://docs.docker.com/engine/install/fedora/ removes the Fedora-supplied Docker and replaces it. I rebooted and removed the kernel parameter; the Docker images needed to be removed because of overlay, but after removing them, "things seem ok so far" (tm).

Thank you! TL;DR: the version in the Fedora repository (33 as of now) is legacy. Install docker-ce from the Docker repository.
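The steps from the linked Fedora install page boil down to roughly the following sketch:

sudo dnf -y install dnf-plugins-core
sudo dnf config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo
sudo dnf install docker-ce docker-ce-cli containerd.io
sudo systemctl enable --now docker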

@bcookatpcsd

Much happiness with this Fedora system vs the Clear system I previously was running..

Just a homelab..

root@fedora ~# docker version
Client: Docker Engine - Community
 Version:           20.10.3
 API version:       1.41
 Go version:        go1.13.15
 Git commit:        48d30b5
 Built:             Fri Jan 29 14:33:58 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.3
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       46229ca
  Built:            Fri Jan 29 14:31:38 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.3
  GitCommit:        269548fa27e0089a8b8278fc4fc781d7f65a939b
 runc:
  Version:          1.0.0-rc92
  GitCommit:        ff819c7e9184c13b7c2607fe6c30ae19403a7aff
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
root@fedora ~# uname -r
5.10.17-200.fc33.x86_64
root@fedora ~# cat /proc/cmdline
BOOT_IMAGE=(hd0,gpt2)/boot/vmlinuz-5.10.17-200.fc33.x86_64 root=UUID=448267b7-614f-43d2-a0cc-72113faa7d10 ro resume=UUID=9239ee84-9014-489b-9350-393baa12ee38 rhgb quiet mitigations=off systemd.unified_cgroup_hierarchy=0
root@fedora ~# rpm -qa | grep docker
lazydocker-0.10-1.el7.harbottle.x86_64
docker-ce-cli-20.10.3-3.fc33.x86_64
docker-ce-rootless-extras-20.10.3-3.fc33.x86_64
docker-ce-20.10.3-3.fc33.x86_64
python3-dockerpty-0.4.1-20.fc33.noarch
python3-docker-4.3.1-1.fc33.noarch
python3-docker+ssh-4.3.1-1.fc33.noarch
docker-compose-1.27.4-1.fc33.noarch
python3-dockerfile-parse-0.0.13-7.fc33.noarch

I see the unified_cgroup_hierarchy is still an argument.. I'll remove it and confirm..

@danielefranceschi

Still present in CentOS 7, kernel 3.10.0-1160.el7.x86_64.

@bcookatpcsd

Just found this by accident.. have not tried or tested..

https://wiki.voidlinux.org/Docker

(void does not use systemd fwiw)

Troubleshooting
cgroups bug
Docker seems to require systemd cgroups to be mounted on /sys/fs/cgroup/systemd.

You may get the following error while running docker:

$ docker: Error response from daemon: cgroups: cannot found cgroup mount destination: unknown.
To fix the error, create the directory, and mount systemd cgroups there:

# mkdir /sys/fs/cgroup/systemd
# mount -t cgroup -o none,name=systemd cgroup /sys/fs/cgroup/systemd

(void is a rolling release..)

So on the current version of Void..

[I] bcook@void30 ~> uname -a
Linux void30 5.10.22_1 #1 SMP 1615288648 x86_64 GNU/Linux
[I] bcook@void30 ~> docker version 
Client:
 Version:           19.03.15
 API version:       1.40
 Go version:        go1.15.7
 Git commit:        v19.03.15
 Built:             Fri Feb  5 23:20:42 2021
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          19.03.15
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.15.7
  Git commit:       v19.03.15
  Built:            Fri Feb  5 23:20:42 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.3
  GitCommit:        UNSET
 runc:
  Version:          spec: 1.0.2-dev
  GitCommit:        
 docker-init:
  Version:          0.18.0
  GitCommit:        
[I] bcook@void30 ~> cat /proc/cmdline 
BOOT_IMAGE=/boot/vmlinuz-5.10.22_1 root=UUID=4d191e11-7108-4e21-ad03-556bec5430d1 ro loglevel=4 slub_debug=P page_poison=1 mitigations=off

From Docker docs..

https://docs.docker.com/engine/install/binaries/

Prerequisites
Before attempting to install Docker from binaries, be sure your host machine meets the prerequisites:

A 64-bit installation
Version 3.10 or higher of the Linux kernel. The latest version of the kernel available for your platform is recommended.
iptables version 1.4 or higher
git version 1.7 or higher
A ps executable, usually provided by procps or a similar package.
XZ Utils 4.9 or higher
A properly mounted cgroupfs hierarchy; a single, all-encompassing cgroup mount point is not sufficient (see GitHub issues #2683, #3485, #4568).

https://github.com/tianon/cgroupfs-mount/blob/master/cgroupfs-mount

moby/moby#2683

moby/moby#3485

moby/moby#4568

My 0.02..

I'm sure there are reasons to run 'latest and greatest' docker.. 20.10.x but I'm quite happy with the minimal overhead from void paired with the usefulness of the base packages.. to get docker containers going.. fwiw, haproxy in docker gets destroyed cpu-wise for some reason.. installed haproxy in void base.. back to a sleeping giant..

Have void running in esxi and bare metal..

YMMV

@hexagonrecursion

According to https://src.fedoraproject.org/rpms/moby-engine, Fedora 34 has moby-engine-20.10.5, but Fedora 33 and Fedora 32 are still stuck with 19.03.x at the time of writing.

@gjkim42

gjkim42 commented Apr 19, 2021

This issue is still reproducible in the following environment without the kernel parameter cgroup.memory=nokmem:

$ uname -r
3.10.0-1160.24.1.el7.x86_64
$ cat /etc/centos-release
CentOS Linux release 7.6.1810 (Core)

@abhiTamrakar

+1 to 3.10.0-1160.24.1.el7.x86_64 having the same issue

@mdonges

mdonges commented Jun 18, 2021

Same problem here:

Starting apache_db_1 ... error

ERROR: for apache_db_1 Cannot start service db: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:385: applying cgroup configuration for process caused: mkdir /sys/fs/cgroup/memory/docker/dd0e00f46b0c794d48f612d717858a39a060cd1496cfe152b0844d80239da588: cannot allocate memory: unknown

docker version
Client: Docker Engine - Community
 Version:           20.10.7
 API version:       1.41
 Go version:        go1.13.15
 Git commit:        f0df350
 Built:             Wed Jun 2 11:56:40 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.7
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       b0f5bc3
  Built:            Wed Jun 2 11:54:48 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.6
  GitCommit:        d71fcd7d8303cbf684402823e425e9dd2e99285d
 runc:
  Version:          1.0.0-rc95
  GitCommit:        b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

@dontspamterry

dontspamterry commented Aug 6, 2021

Same issue too:

cat /etc/centos-release

CentOS Linux release 7.9.2009 (Core)

uname -r

3.10.0-1160.24.1.el7.x86_64

docker version

Client: Docker Engine - Community
 Version:           19.03.6
 API version:       1.40
 Go version:        go1.12.16
 Git commit:        369ce74a3c
 Built:             Thu Feb 13 01:29:29 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.6
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.16
  Git commit:       369ce74a3c
  Built:            Thu Feb 13 01:28:07 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.4
  GitCommit:        05f951a3781f4f2c1911b05e61c160e9c30eaa8e
 runc:
  Version:          1.0.0-rc93
  GitCommit:        12644e614e25b05da6fd08a38ffa0cfe1903fdec
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

@yogeshkumark

Facing the same issue.
[docker@kfdct069 config]$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.9 (Maipo)
[docker@kfdct069 config]$ uname -r
3.10.0-1160.45.1.el7.x86_64
[docker@kfdct069 config]$ docker version
Client: Docker Engine - Community
 Version:           19.03.12
 API version:       1.40
 Go version:        go1.13.10
 Git commit:        48a66213fe
 Built:             Mon Jun 22 15:46:54 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.12
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.10
  Git commit:       48a66213fe
  Built:            Mon Jun 22 15:45:28 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.13
  GitCommit:        7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
[docker@kfdct069 config]$

@Piratenkanon

Good morning. I'm a Linux newbie, but I have the same problem here; can anyone help me?

Linux Debian 10
Docker Version 1.5.2-231

uname -r

4.19.0

docker version

Client: Docker Engine - Community
 Version:           20.10.12
 API version:       1.41
 Go version:        go1.16.12
 Git commit:        e91ed57
 Built:             Mon Dec 13 11:45:37 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.12
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.12
  Git commit:       459d0df
  Built:            Mon Dec 13 11:43:46 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.12
  GitCommit:        7b11cfaabd73bb80907dd23182b9347b4245eb5d
 runc:
  Version:          1.0.2
  GitCommit:        v1.0.2-0-g52b36a2
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

Starting containers fails with the error:
Error: {"message":"OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:385: applying cgroup configuration for process caused: mkdir /sys/fs/cgroup/memory/docker: cannot allocate memory: unknown"}

For help, you can also mail me at [email protected]

Thanks in advance,
Martin

@bcookatpcsd

Debian 10 has 4.19 kernel? (could be..)

https://docs.docker.com/engine/install/debian/
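
On a Debian 10 / 4.19 host the EL7 nokmem workaround above may not be the right fix, so it is worth first checking what cgroup setup the machine actually has. A read-only sketch:

# cgroup v1 hosts report tmpfs here, unified cgroup v2 hosts report cgroup2fs
stat -fc %T /sys/fs/cgroup/

# on cgroup v1 the memory controller must be mounted and enabled
ls -d /sys/fs/cgroup/memory
grep memory /proc/cgroups    # last column is the "enabled" flag

If the memory controller is missing or disabled there, the mkdir error comes from the cgroup setup itself; if it is present and a manual mkdir under /sys/fs/cgroup/memory also fails, the same exhausted-cgroup diagnosis as above applies.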

@Piratenkanon

Debian 10 has 4.19 kernel? (could be..)

https://docs.docker.com/engine/install/debian/

yes,
uname -r
4.19.0

greetings, Martin

@takhello

takhello commented Apr 18, 2022

A complete log:
"Can not find '/sys/fs/cgroup/memory/docker/cc490e2a6526876365364a30e3f839f223dd1aa3e3fc13d0db5e713dad8cd1b7'"

2022-04-18 05:13:59,633+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.pax.logging.NexusLogActivator - start
2022-04-18 05:13:59,698+0000 WARN [CM Event Dispatcher (Fire ConfigurationEvent: pid=jmx.acl)] *SYSTEM org.apache.felix.fileinstall - File is not writeable: file:/opt/sonatype/nexus/etc/karaf/jmx.acl.cfg
2022-04-18 05:13:59,700+0000 WARN [CM Event Dispatcher (Fire ConfigurationEvent: pid=org.apache.karaf.log)] *SYSTEM org.apache.felix.fileinstall - File is not writeable: file:/opt/sonatype/nexus/etc/karaf/org.apache.karaf.log.cfg
2022-04-18 05:13:59,703+0000 WARN [CM Event Dispatcher (Fire ConfigurationEvent: pid=org.apache.karaf.features)] *SYSTEM org.apache.felix.fileinstall - File is not writeable: file:/opt/sonatype/nexus/etc/karaf/org.apache.karaf.features.cfg
2022-04-18 05:13:59,706+0000 WARN [CM Event Dispatcher (Fire ConfigurationEvent: pid=org.ops4j.pax.url.mvn)] *SYSTEM org.apache.felix.fileinstall - File is not writeable: file:/opt/sonatype/nexus/etc/karaf/org.ops4j.pax.url.mvn.cfg
2022-04-18 05:13:59,708+0000 WARN [CM Event Dispatcher (Fire ConfigurationEvent: pid=org.apache.felix.fileinstall~deploy)] *SYSTEM org.apache.felix.fileinstall - File is not writeable: file:/opt/sonatype/nexus/etc/karaf/org.apache.felix.fileinstall-deploy.cfg
2022-04-18 05:13:59,715+0000 WARN [CM Event Dispatcher (Fire ConfigurationEvent: pid=profile)] *SYSTEM org.apache.felix.fileinstall - File is not writeable: file:/opt/sonatype/nexus/etc/karaf/profile.cfg
2022-04-18 05:13:59,716+0000 WARN [CM Event Dispatcher (Fire ConfigurationEvent: pid=org.apache.karaf.kar)] *SYSTEM org.apache.felix.fileinstall - File is not writeable: file:/opt/sonatype/nexus/etc/karaf/org.apache.karaf.kar.cfg
2022-04-18 05:13:59,717+0000 WARN [CM Event Dispatcher (Fire ConfigurationEvent: pid=org.apache.karaf.shell)] *SYSTEM org.apache.felix.fileinstall - File is not writeable: file:/opt/sonatype/nexus/etc/karaf/org.apache.karaf.shell.cfg
2022-04-18 05:13:59,719+0000 WARN [CM Event Dispatcher (Fire ConfigurationEvent: pid=org.apache.karaf.service.acl.command)] *SYSTEM org.apache.felix.fileinstall - File is not writeable: file:/opt/sonatype/nexus/etc/karaf/org.apache.karaf.service.acl.command.cfg
2022-04-18 05:13:59,721+0000 WARN [CM Event Dispatcher (Fire ConfigurationEvent: pid=org.apache.karaf.management)] *SYSTEM org.apache.felix.fileinstall - File is not writeable: file:/opt/sonatype/nexus/etc/karaf/org.apache.karaf.management.cfg
2022-04-18 05:13:59,722+0000 WARN [CM Event Dispatcher (Fire ConfigurationEvent: pid=org.apache.karaf.jaas)] *SYSTEM org.apache.felix.fileinstall - File is not writeable: file:/opt/sonatype/nexus/etc/karaf/org.apache.karaf.jaas.cfg
2022-04-18 05:13:59,723+0000 WARN [CM Event Dispatcher (Fire ConfigurationEvent: pid=org.ops4j.pax.logging)] *SYSTEM org.apache.felix.fileinstall - File is not writeable: file:/opt/sonatype/nexus/etc/karaf/org.ops4j.pax.logging.cfg
2022-04-18 05:13:59,892+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.features.internal.FeaturesWrapper - Fast FeaturesService starting
2022-04-18 05:14:00,491+0000 INFO [FelixStartLevel] *SYSTEM ROOT - bundle org.apache.felix.scr:2.1.30 (55) Starting with globalExtender setting: false
2022-04-18 05:14:00,494+0000 INFO [FelixStartLevel] *SYSTEM ROOT - bundle org.apache.felix.scr:2.1.30 (55) Version = 2.1.30
2022-04-18 05:14:00,790+0000 WARN [FelixStartLevel] *SYSTEM uk.org.lidalia.sysoutslf4j.context.SysOutOverSLF4JInitialiser - Your logging framework class org.ops4j.pax.logging.slf4j.Slf4jLogger is not known - if it needs access to the standard println methods on the console you will need to register it by calling registerLoggingSystemPackage
2022-04-18 05:14:00,792+0000 INFO [FelixStartLevel] *SYSTEM uk.org.lidalia.sysoutslf4j.context.SysOutOverSLF4J - Package org.ops4j.pax.logging.slf4j registered; all classes within it or subpackages of it will be allowed to print to System.out and System.err
2022-04-18 05:14:00,795+0000 INFO [FelixStartLevel] *SYSTEM uk.org.lidalia.sysoutslf4j.context.SysOutOverSLF4J - Replaced standard System.out and System.err PrintStreams with SLF4JPrintStreams
2022-04-18 05:14:00,797+0000 INFO [FelixStartLevel] *SYSTEM uk.org.lidalia.sysoutslf4j.context.SysOutOverSLF4J - Redirected System.out and System.err to SLF4J for this context
2022-04-18 05:14:00,801+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.bootstrap.ConfigurationBuilder - Properties:
2022-04-18 05:14:00,802+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.bootstrap.ConfigurationBuilder - application-host='0.0.0.0'
2022-04-18 05:14:00,802+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.bootstrap.ConfigurationBuilder - application-port='8081'
2022-04-18 05:14:00,802+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.bootstrap.ConfigurationBuilder - fabric.etc='/opt/sonatype/nexus/etc/fabric'
2022-04-18 05:14:00,803+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.bootstrap.ConfigurationBuilder - jetty.etc='/opt/sonatype/nexus/etc/jetty'
2022-04-18 05:14:00,803+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.bootstrap.ConfigurationBuilder - karaf.base='/opt/sonatype/nexus'
2022-04-18 05:14:00,803+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.bootstrap.ConfigurationBuilder - karaf.data='/nexus-data'
2022-04-18 05:14:00,803+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.bootstrap.ConfigurationBuilder - karaf.etc='/opt/sonatype/nexus/etc/karaf'
2022-04-18 05:14:00,803+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.bootstrap.ConfigurationBuilder - karaf.home='/opt/sonatype/nexus'
2022-04-18 05:14:00,804+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.bootstrap.ConfigurationBuilder - karaf.instances='/nexus-data/instances'
2022-04-18 05:14:00,804+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.bootstrap.ConfigurationBuilder - logback.etc='/opt/sonatype/nexus/etc/logback'
2022-04-18 05:14:00,804+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.bootstrap.ConfigurationBuilder - nexus-args='/opt/sonatype/nexus/etc/jetty/jetty.xml,/opt/sonatype/nexus/etc/jetty/jetty-http.xml,/opt/sonatype/nexus/etc/jetty/jetty-requestlog.xml'
2022-04-18 05:14:00,804+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.bootstrap.ConfigurationBuilder - nexus-context-path='/'
2022-04-18 05:14:00,804+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.bootstrap.ConfigurationBuilder - nexus-edition='nexus-pro-edition'
2022-04-18 05:14:00,804+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.bootstrap.ConfigurationBuilder - nexus-features='nexus-pro-feature'
2022-04-18 05:14:00,805+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.bootstrap.ConfigurationBuilder - nexus.clustered='false'
2022-04-18 05:14:00,805+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.bootstrap.ConfigurationBuilder - ssl.etc='/opt/sonatype/nexus/etc/ssl'
2022-04-18 05:14:00,805+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.bootstrap.Launcher - Java: 1.8.0_282, OpenJDK 64-Bit Server VM, Red Hat, Inc., 25.282-b08
2022-04-18 05:14:00,805+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.bootstrap.Launcher - OS: Linux, 3.10.0-514.el7.x86_64, amd64
2022-04-18 05:14:00,805+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.bootstrap.Launcher - User: nexus, en, /opt/sonatype/nexus
2022-04-18 05:14:00,806+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.bootstrap.Launcher - CWD: /opt/sonatype/nexus
2022-04-18 05:14:00,807+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.bootstrap.Launcher - TMP: /nexus-data/tmp
2022-04-18 05:14:00,810+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.bootstrap.jetty.JettyServer - Starting
2022-04-18 05:14:00,817+0000 INFO [FelixStartLevel] *SYSTEM org.eclipse.jetty.util.log - Logging initialized @2652ms to org.eclipse.jetty.util.log.Slf4jLog
2022-04-18 05:14:00,822+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.bootstrap.jetty.JettyServer - Applying configuration: file:/opt/sonatype/nexus/etc/jetty/jetty.xml
2022-04-18 05:14:00,922+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.bootstrap.jetty.JettyServer - Applying configuration: file:/opt/sonatype/nexus/etc/jetty/jetty-http.xml
2022-04-18 05:14:00,944+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.bootstrap.jetty.JettyServer - Applying configuration: file:/opt/sonatype/nexus/etc/jetty/jetty-requestlog.xml
2022-04-18 05:14:00,957+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.bootstrap.jetty.JettyServer - Starting: Server@3be8a548{STOPPED}[9.4.43.v20210629]
2022-04-18 05:14:00,960+0000 INFO [jetty-main-1] *SYSTEM org.eclipse.jetty.server.Server - jetty-9.4.43.v20210629; built: 2021-06-30T11:07:22.254Z; git: 526006ecfa3af7f1a27ef3a288e2bef7ea9dd7e8; jvm 1.8.0_282-b08
2022-04-18 05:14:01,001+0000 INFO [jetty-main-1] *SYSTEM org.eclipse.jetty.server.session - DefaultSessionIdManager workerName=node0
2022-04-18 05:14:01,001+0000 INFO [jetty-main-1] *SYSTEM org.eclipse.jetty.server.session - No SessionScavenger set, using defaults
2022-04-18 05:14:01,002+0000 INFO [jetty-main-1] *SYSTEM org.eclipse.jetty.server.session - node0 Scavenging every 660000ms
2022-04-18 05:14:01,008+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.bootstrap.osgi.BootstrapListener - Initializing
2022-04-18 05:14:01,013+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.bootstrap.osgi.BootstrapListener - Loading OSS Edition
2022-04-18 05:14:01,014+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.bootstrap.osgi.BootstrapListener - Installing: nexus-oss-edition/3.38.1.01 (nexus-orient/3.38.1.01)
2022-04-18 05:14:03,222+0000 INFO [jetty-main-1] *SYSTEM org.ehcache.core.osgi.EhcacheActivator - Detected OSGi Environment (core is in bundle: org.ehcache [137]): Using OSGi Based Service Loading
2022-04-18 05:14:03,494+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.bootstrap.osgi.BootstrapListener - Installed: nexus-oss-edition/3.38.1.01 (nexus-orient/3.38.1.01)
2022-04-18 05:14:03,848+0000 INFO [jetty-main-1] *SYSTEM org.apache.shiro.nexus.NexusWebSessionManager - Global session timeout: 1800000 ms
2022-04-18 05:14:03,848+0000 INFO [jetty-main-1] *SYSTEM org.apache.shiro.nexus.NexusWebSessionManager - Session-cookie prototype: name=NXSESSIONID
2022-04-18 05:14:03,885+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.sonatype.nexus.common [3.38.1.01]
2022-04-18 05:14:03,981+0000 INFO [jetty-main-1] *SYSTEM org.hibernate.validator.internal.util.Version - HV000001: Hibernate Validator 6.2.0.Final
2022-04-18 05:14:04,135+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.sonatype.nexus.common [3.38.1.01]
2022-04-18 05:14:04,135+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.hibernate.validator [6.2.0.Final]
2022-04-18 05:14:04,152+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.hibernate.validator [6.2.0.Final]
2022-04-18 05:14:04,153+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.sonatype.nexus.cache [3.38.1.01]
2022-04-18 05:14:04,181+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.sonatype.nexus.cache [3.38.1.01]
2022-04-18 05:14:04,182+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.sonatype.nexus.supportzip-api [3.38.1.01]
2022-04-18 05:14:04,206+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.sonatype.nexus.supportzip-api [3.38.1.01]
2022-04-18 05:14:04,207+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.sonatype.nexus.crypto [3.38.1.01]
2022-04-18 05:14:04,299+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.sonatype.nexus.crypto [3.38.1.01]
2022-04-18 05:14:04,300+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.sonatype.nexus.security [3.38.1.01]
2022-04-18 05:14:04,453+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.sonatype.nexus.security [3.38.1.01]
2022-04-18 05:14:04,453+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.sonatype.nexus.thread [3.38.1.01]
2022-04-18 05:14:04,470+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.sonatype.nexus.thread [3.38.1.01]
2022-04-18 05:14:04,471+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.sonatype.nexus.scheduling [3.38.1.01]
2022-04-18 05:14:04,658+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.sonatype.nexus.scheduling [3.38.1.01]
2022-04-18 05:14:04,658+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.sonatype.nexus.blobstore [3.38.1.01]
2022-04-18 05:14:04,705+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.sonatype.nexus.blobstore [3.38.1.01]
2022-04-18 05:14:04,706+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.apache.tika.core [1.26.0]
2022-04-18 05:14:04,721+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.apache.tika.core [1.26.0]
2022-04-18 05:14:04,722+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.sonatype.nexus.jmx [3.38.1.01]
2022-04-18 05:14:04,740+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.sonatype.nexus.jmx [3.38.1.01]
2022-04-18 05:14:04,741+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.sonatype.nexus.blobstore-file [3.38.1.01]
2022-04-18 05:14:04,785+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.sonatype.nexus.blobstore-file [3.38.1.01]
2022-04-18 05:14:04,786+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.sonatype.nexus.capability [3.38.1.01]
2022-04-18 05:14:04,804+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.sonatype.nexus.capability [3.38.1.01]
2022-04-18 05:14:04,804+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.sonatype.nexus.commands [3.38.1.01]
2022-04-18 05:14:04,818+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.sonatype.nexus.commands [3.38.1.01]
2022-04-18 05:14:04,818+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.sonatype.nexus.email [3.38.1.01]
2022-04-18 05:14:04,829+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.sonatype.nexus.email [3.38.1.01]
2022-04-18 05:14:04,830+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.sonatype.nexus.httpclient [3.38.1.01]
2022-04-18 05:14:04,842+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.sonatype.nexus.httpclient [3.38.1.01]
2022-04-18 05:14:04,842+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.sonatype.nexus.servlet [3.38.1.01]
2022-04-18 05:14:04,852+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.sonatype.nexus.servlet [3.38.1.01]
2022-04-18 05:14:04,853+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.sonatype.nexus.datastore [3.38.1.01]
2022-04-18 05:14:04,880+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.sonatype.nexus.datastore [3.38.1.01]
2022-04-18 05:14:04,881+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.sonatype.nexus.orient [3.38.1.01]
2022-04-18 05:14:04,946+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.sonatype.nexus.orient [3.38.1.01]
2022-04-18 05:14:04,947+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.sonatype.nexus.base [3.38.1.01]
2022-04-18 05:14:05,079+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.internal.metrics.MetricsModule - Metrics support configured
2022-04-18 05:14:05,113+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.internal.metrics.MetricsModule - Metrics support configured
2022-04-18 05:14:05,479+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.sonatype.nexus.base [3.38.1.01]
2022-04-18 05:14:05,479+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.sonatype.nexus.upgrade [3.38.1.01]
2022-04-18 05:14:05,502+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.sonatype.nexus.upgrade [3.38.1.01]
2022-04-18 05:14:05,503+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.sonatype.nexus.extdirect [3.38.1.01]
2022-04-18 05:14:06,021+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.sonatype.nexus.extdirect [3.38.1.01]
2022-04-18 05:14:06,022+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.sonatype.nexus.siesta [3.38.1.01]
2022-04-18 05:14:06,071+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.sonatype.nexus.siesta [3.38.1.01]
2022-04-18 05:14:06,072+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.sonatype.nexus.rest-jackson2 [3.38.1.01]
2022-04-18 05:14:06,084+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.sonatype.nexus.rest-jackson2 [3.38.1.01]
2022-04-18 05:14:06,084+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.sonatype.nexus.swagger [3.38.1.01]
2022-04-18 05:14:06,104+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.sonatype.nexus.swagger [3.38.1.01]
2022-04-18 05:14:06,104+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.sonatype.nexus.rapture [3.38.1.01]
2022-04-18 05:14:06,185+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.sonatype.nexus.rapture [3.38.1.01]
2022-04-18 05:14:06,185+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.sonatype.nexus.quartz [3.38.1.01]
2022-04-18 05:14:06,235+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.sonatype.nexus.quartz [3.38.1.01]
2022-04-18 05:14:06,235+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.sonatype.nexus.oss-edition [3.38.1.01]
2022-04-18 05:14:06,245+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.sonatype.nexus.oss-edition [3.38.1.01]
2022-04-18 05:14:06,246+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusContextListener - Running lifecycle phases [KERNEL, STORAGE, RESTORE, UPGRADE, SCHEMAS, EVENTS, SECURITY, SERVICES, CAPABILITIES, TASKS]
2022-04-18 05:13:32,863+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusLifecycleManager - Start KERNEL
2022-04-18 05:13:32,864+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.internal.log.LogbackLoggerOverrides - File: /nexus-data/etc/logback/logback-overrides.xml
2022-04-18 05:13:32,866+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.internal.log.LogbackLogManager - Configuring
2022-04-18 05:13:32,874+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusContextListener - Installing: [nexus-oss-feature/3.38.1.01, nexus-cma-feature/3.38.1.01, nexus-cma-extra/3.38.1.01, nexus-ossindex-plugin/3.38.1.01]
2022-04-18 05:13:47,044+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.extender.NexusContextListener - Installed: [nexus-oss-feature/3.38.1.01, nexus-cma-feature/3.38.1.01, nexus-cma-extra/3.38.1.01, nexus-ossindex-plugin/3.38.1.01]
2022-04-18 05:13:47,172+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.sonatype.nexus.plugins.nexus-audit-plugin [3.38.1.01]
2022-04-18 05:13:47,398+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.sonatype.nexus.plugins.nexus-audit-plugin [3.38.1.01]
2022-04-18 05:13:48,038+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.sonatype.nexus.plugins.nexus-ssl-plugin [3.38.1.01]
2022-04-18 05:13:48,081+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.sonatype.nexus.plugins.nexus-ssl-plugin [3.38.1.01]
2022-04-18 05:13:48,209+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.sonatype.nexus.selector [3.38.1.01]
2022-04-18 05:13:48,244+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.sonatype.nexus.selector [3.38.1.01]
2022-04-18 05:13:48,245+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.sonatype.nexus.elasticsearch [3.38.1.01]
2022-04-18 05:13:48,265+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.sonatype.nexus.elasticsearch [3.38.1.01]
2022-04-18 05:13:48,265+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.sonatype.nexus.repository-content [3.38.1.01]
2022-04-18 05:13:48,803+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.sonatype.nexus.repository-content [3.38.1.01]
2022-04-18 05:13:48,804+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.sonatype.nexus.repository-config [3.38.1.01]
2022-04-18 05:13:48,857+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.sonatype.nexus.repository-config [3.38.1.01]
2022-04-18 05:13:48,858+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.sonatype.nexus.plugins.nexus-coreui-plugin [3.38.1.01]
2022-04-18 05:13:48,999+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.sonatype.nexus.plugins.nexus-coreui-plugin [3.38.1.01]
2022-04-18 05:13:49,021+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.sonatype.nexus.plugins.nexus-repository-httpbridge [3.38.1.01]
2022-04-18 05:13:49,066+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.sonatype.nexus.plugins.nexus-repository-httpbridge [3.38.1.01]
2022-04-18 05:13:49,133+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.sonatype.nexus.cleanup-config [3.38.1.01]
2022-04-18 05:13:49,157+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.sonatype.nexus.cleanup-config [3.38.1.01]
2022-04-18 05:13:49,158+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.sonatype.nexus.plugins.nexus-repository-maven [3.38.1.01]
2022-04-18 05:13:49,279+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.sonatype.nexus.plugins.nexus-repository-maven [3.38.1.01]
2022-04-18 05:13:49,280+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.sonatype.nexus.plugins.nexus-script-plugin [3.38.1.01]
2022-04-18 05:13:49,317+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.sonatype.nexus.plugins.nexus-script-plugin [3.38.1.01]
2022-04-18 05:13:49,328+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.sonatype.nexus.plugins.nexus-task-log-cleanup [3.38.1.01]
2022-04-18 05:13:49,339+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.sonatype.nexus.plugins.nexus-task-log-cleanup [3.38.1.01]
2022-04-18 05:13:49,378+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.sonatype.nexus.plugins.nexus-blobstore-s3 [3.38.1.01]
2022-04-18 05:13:49,578+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.sonatype.nexus.plugins.nexus-blobstore-s3 [3.38.1.01]
2022-04-18 05:13:49,587+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.sonatype.nexus.plugins.nexus-blobstore-tasks [3.38.1.01]
2022-04-18 05:13:49,605+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.sonatype.nexus.plugins.nexus-blobstore-tasks [3.38.1.01]
2022-04-18 05:13:49,609+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.sonatype.nexus.plugins.nexus-onboarding-plugin [3.38.1.01]
2022-04-18 05:13:49,623+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.sonatype.nexus.plugins.nexus-onboarding-plugin [3.38.1.01]
2022-04-18 05:13:49,628+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.sonatype.nexus.plugins.nexus-default-role-plugin [3.38.1.01]
2022-04-18 05:13:49,644+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.sonatype.nexus.plugins.nexus-default-role-plugin [3.38.1.01]
2022-04-18 05:13:49,655+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.sonatype.nexus.plugins.nexus-repository-apt [3.38.1.01]
2022-04-18 05:13:49,713+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.sonatype.nexus.plugins.nexus-repository-apt [3.38.1.01]
2022-04-18 05:13:49,953+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.sonatype.nexus.plugins.nexus-repository-raw [3.38.1.01]
2022-04-18 05:13:50,008+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.sonatype.nexus.plugins.nexus-repository-raw [3.38.1.01]
2022-04-18 05:13:50,017+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.sonatype.nexus.plugins.nexus-restore-apt [3.38.1.01]
2022-04-18 05:13:50,030+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.sonatype.nexus.plugins.nexus-restore-apt [3.38.1.01]
2022-04-18 05:13:50,041+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.sonatype.nexus.plugins.nexus-restore-maven [3.38.1.01]
2022-04-18 05:13:50,051+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.sonatype.nexus.plugins.nexus-restore-maven [3.38.1.01]
2022-04-18 05:13:50,059+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.sonatype.nexus.plugins.nexus-restore-raw [3.38.1.01]
2022-04-18 05:13:50,069+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.sonatype.nexus.plugins.nexus-restore-raw [3.38.1.01]
2022-04-18 05:13:50,080+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.sonatype.nexus.cleanup [3.38.1.01]
2022-04-18 05:13:50,105+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.sonatype.nexus.cleanup [3.38.1.01]
2022-04-18 05:13:50,244+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.sonatype.nexus.core [3.38.1.01]
2022-04-18 05:13:50,456+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.sonatype.nexus.core [3.38.1.01]
2022-04-18 05:13:50,504+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING com.sonatype.nexus.plugins.nexus-ldap-plugin [3.38.1.01]
2022-04-18 05:13:50,549+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED com.sonatype.nexus.plugins.nexus-ldap-plugin [3.38.1.01]
2022-04-18 05:13:50,668+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING com.sonatype.nexus.plugins.nexus-proui-plugin [3.38.1.01]
2022-04-18 05:13:50,684+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED com.sonatype.nexus.plugins.nexus-proui-plugin [3.38.1.01]
2022-04-18 05:13:50,688+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING com.sonatype.nexus.plugins.nexus-proximanova-plugin [3.38.1.01]
2022-04-18 05:13:50,697+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED com.sonatype.nexus.plugins.nexus-proximanova-plugin [3.38.1.01]
2022-04-18 05:13:50,859+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING wrap_file_system_com_sonatype_insight_scan_insight-scanner-core_2.33.6-01_insight-scanner-core-2.33.6-01.jar [0.0.0]
2022-04-18 05:13:50,875+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED wrap_file_system_com_sonatype_insight_scan_insight-scanner-core_2.33.6-01_insight-scanner-core-2.33.6-01.jar [0.0.0]
2022-04-18 05:13:50,884+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING wrap_file_system_com_sonatype_insight_scan_insight-scanner-model-io_2.33.6-01_insight-scanner-model-io-2.33.6-01.jar [0.0.0]
2022-04-18 05:13:50,908+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED wrap_file_system_com_sonatype_insight_scan_insight-scanner-model-io_2.33.6-01_insight-scanner-model-io-2.33.6-01.jar [0.0.0]
2022-04-18 05:13:50,909+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING com.sonatype.nexus.plugins.nexus-healthcheck-base [3.38.1.01]
2022-04-18 05:13:51,037+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED com.sonatype.nexus.plugins.nexus-healthcheck-base [3.38.1.01]
2022-04-18 05:13:51,038+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING wrap_file_system_com_sonatype_licensing_license-bundle_1.6.0_license-bundle-1.6.0.jar [0.0.0]
2022-04-18 05:13:51,068+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED wrap_file_system_com_sonatype_licensing_license-bundle_1.6.0_license-bundle-1.6.0.jar [0.0.0]
2022-04-18 05:13:51,069+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING com.sonatype.nexus.licensing-extension [3.38.1.01]
2022-04-18 05:13:51,095+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED com.sonatype.nexus.licensing-extension [3.38.1.01]
2022-04-18 05:13:51,096+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING com.sonatype.nexus.plugins.nexus-analytics-plugin [3.38.1.01]
2022-04-18 05:13:51,145+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED com.sonatype.nexus.plugins.nexus-analytics-plugin [3.38.1.01]
2022-04-18 05:13:51,147+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING com.sonatype.nexus.plugins.nexus-licensing-plugin [3.38.1.01]
2022-04-18 05:13:51,169+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED com.sonatype.nexus.plugins.nexus-licensing-plugin [3.38.1.01]
2022-04-18 05:13:51,213+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING com.sonatype.nexus.plugins.nexus-repository-npm [3.38.1.01]
2022-04-18 05:13:51,329+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED com.sonatype.nexus.plugins.nexus-repository-npm [3.38.1.01]
2022-04-18 05:13:51,331+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING com.sonatype.nexus.plugins.nexus-repository-nuget [3.38.1.01]
2022-04-18 05:13:51,691+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED com.sonatype.nexus.plugins.nexus-repository-nuget [3.38.1.01]
2022-04-18 05:13:51,693+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING com.sonatype.nexus.plugins.nexus-repository-rubygems [3.38.1.01]
2022-04-18 05:13:51,742+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED com.sonatype.nexus.plugins.nexus-repository-rubygems [3.38.1.01]
2022-04-18 05:13:51,743+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING org.sonatype.nexus.rest-client [3.38.1.01]
2022-04-18 05:13:51,752+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED org.sonatype.nexus.rest-client [3.38.1.01]
2022-04-18 05:13:51,753+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING com.sonatype.nexus.plugins.nexus-migration-plugin [3.38.1.01]
2022-04-18 05:13:51,863+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED com.sonatype.nexus.plugins.nexus-migration-plugin [3.38.1.01]
2022-04-18 05:13:51,888+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING com.sonatype.nexus.plugins.nexus-vulnerability-plugin [3.38.1.01]
2022-04-18 05:13:51,928+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED com.sonatype.nexus.plugins.nexus-vulnerability-plugin [3.38.1.01]
2022-04-18 05:13:51,929+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING com.sonatype.nexus.plugins.nexus-outreach-plugin [3.38.1.01]
2022-04-18 05:13:51,952+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED com.sonatype.nexus.plugins.nexus-outreach-plugin [3.38.1.01]
2022-04-18 05:13:51,956+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING com.sonatype.nexus.plugins.nexus-rutauth-plugin [3.38.1.01]
2022-04-18 05:13:51,969+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED com.sonatype.nexus.plugins.nexus-rutauth-plugin [3.38.1.01]
2022-04-18 05:13:51,990+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING com.sonatype.nexus.plugins.nexus-clm-oss-plugin [3.38.1.01]
2022-04-18 05:13:52,000+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED com.sonatype.nexus.plugins.nexus-clm-oss-plugin [3.38.1.01]
2022-04-18 05:13:52,015+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING com.sonatype.nexus.plugins.nexus-restore-nuget [3.38.1.01]
2022-04-18 05:13:52,028+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED com.sonatype.nexus.plugins.nexus-restore-nuget [3.38.1.01]
2022-04-18 05:13:52,057+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING com.sonatype.nexus.plugins.nexus-repository-docker [3.38.1.01]
2022-04-18 05:13:52,241+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED com.sonatype.nexus.plugins.nexus-repository-docker [3.38.1.01]
2022-04-18 05:13:52,275+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING com.sonatype.nexus.plugins.nexus-repository-yum [3.38.1.01]
2022-04-18 05:13:52,362+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED com.sonatype.nexus.plugins.nexus-repository-yum [3.38.1.01]
2022-04-18 05:13:52,370+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING com.sonatype.nexus.plugins.nexus-restore-yum [3.38.1.01]
2022-04-18 05:13:52,382+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED com.sonatype.nexus.plugins.nexus-restore-yum [3.38.1.01]
2022-04-18 05:13:52,389+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING com.sonatype.nexus.plugins.nexus-restore-docker [3.38.1.01]
2022-04-18 05:13:52,405+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED com.sonatype.nexus.plugins.nexus-restore-docker [3.38.1.01]
2022-04-18 05:13:52,444+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING nexus-blobstore-azure-cloud [3.38.1.01]
2022-04-18 05:13:52,477+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED nexus-blobstore-azure-cloud [3.38.1.01]
2022-04-18 05:13:52,506+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING com.sonatype.nexus.plugins.nexus-ahc-plugin [3.38.1.01]
2022-04-18 05:13:52,532+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED com.sonatype.nexus.plugins.nexus-ahc-plugin [3.38.1.01]
2022-04-18 05:13:52,563+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING com.sonatype.nexus.plugins.nexus-restore-npm [3.38.1.01]
2022-04-18 05:13:52,576+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED com.sonatype.nexus.plugins.nexus-restore-npm [3.38.1.01]
2022-04-18 05:13:52,583+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING com.sonatype.nexus.plugins.nexus-repository-helm [3.38.1.01]
2022-04-18 05:13:52,626+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED com.sonatype.nexus.plugins.nexus-repository-helm [3.38.1.01]
2022-04-18 05:13:52,633+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING com.sonatype.nexus.plugins.nexus-repository-gitlfs [3.38.1.01]
2022-04-18 05:13:52,657+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED com.sonatype.nexus.plugins.nexus-repository-gitlfs [3.38.1.01]
2022-04-18 05:13:52,664+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING com.sonatype.nexus.plugins.nexus-restore-helm [3.38.1.01]
2022-04-18 05:13:52,675+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED com.sonatype.nexus.plugins.nexus-restore-helm [3.38.1.01]
2022-04-18 05:13:52,697+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING com.sonatype.nexus.plugins.nexus-repository-pypi [3.38.1.01]
2022-04-18 05:13:52,766+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED com.sonatype.nexus.plugins.nexus-repository-pypi [3.38.1.01]
2022-04-18 05:13:52,775+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING com.sonatype.nexus.plugins.nexus-restore-pypi [3.38.1.01]
2022-04-18 05:13:52,788+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED com.sonatype.nexus.plugins.nexus-restore-pypi [3.38.1.01]
2022-04-18 05:13:52,801+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING com.sonatype.nexus.plugins.nexus-repository-conda [3.38.1.01]
2022-04-18 05:13:52,821+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED com.sonatype.nexus.plugins.nexus-repository-conda [3.38.1.01]
2022-04-18 05:13:52,830+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING com.sonatype.nexus.plugins.nexus-repository-conan [3.38.1.01]
2022-04-18 05:13:52,875+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED com.sonatype.nexus.plugins.nexus-repository-conan [3.38.1.01]
2022-04-18 05:13:52,890+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING com.sonatype.nexus.plugins.nexus-restore-conan [3.38.1.01]
2022-04-18 05:13:52,897+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED com.sonatype.nexus.plugins.nexus-restore-conan [3.38.1.01]
2022-04-18 05:13:52,911+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING com.sonatype.nexus.plugins.nexus-repository-r [3.38.1.01]
2022-04-18 05:13:52,953+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED com.sonatype.nexus.plugins.nexus-repository-r [3.38.1.01]
2022-04-18 05:13:52,962+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING com.sonatype.nexus.plugins.nexus-restore-r [3.38.1.01]
2022-04-18 05:13:52,972+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED com.sonatype.nexus.plugins.nexus-restore-r [3.38.1.01]
2022-04-18 05:13:52,981+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING com.sonatype.nexus.plugins.nexus-repository-cocoapods [3.38.1.01]
2022-04-18 05:13:53,010+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED com.sonatype.nexus.plugins.nexus-repository-cocoapods [3.38.1.01]
2022-04-18 05:13:53,018+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING com.sonatype.nexus.plugins.nexus-restore-rubygems [3.38.1.01]
2022-04-18 05:13:53,029+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED com.sonatype.nexus.plugins.nexus-restore-rubygems [3.38.1.01]
2022-04-18 05:13:53,036+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING com.sonatype.nexus.plugins.nexus-repository-golang [3.38.1.01]
2022-04-18 05:13:53,067+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED com.sonatype.nexus.plugins.nexus-repository-golang [3.38.1.01]
2022-04-18 05:13:53,075+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING com.sonatype.nexus.plugins.nexus-repository-p2 [3.38.1.01]
2022-04-18 05:13:53,110+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED com.sonatype.nexus.plugins.nexus-repository-p2 [3.38.1.01]
2022-04-18 05:13:53,117+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING com.sonatype.nexus.plugins.nexus-restore-p2 [3.38.1.01]
2022-04-18 05:13:53,127+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED com.sonatype.nexus.plugins.nexus-restore-p2 [3.38.1.01]
2022-04-18 05:13:53,136+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING com.sonatype.nexus.plugins.nexus-repository-bower [3.38.1.01]
2022-04-18 05:13:53,175+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED com.sonatype.nexus.plugins.nexus-repository-bower [3.38.1.01]
2022-04-18 05:13:53,208+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATING com.sonatype.nexus.plugins.nexus-ossindex-plugin [3.38.1.01]
2022-04-18 05:13:53,225+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusBundleTracker - ACTIVATED com.sonatype.nexus.plugins.nexus-ossindex-plugin [3.38.1.01]
2022-04-18 05:13:53,247+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusLifecycleManager - Start STORAGE
2022-04-18 05:13:53,260+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.internal.node.orient.OrientLocalNodeAccess - ID: 7976B580-7B850278-52C5F500-0E9A4D4D-C21B8899
2022-04-18 05:13:53,511+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.internal.orient.DatabaseServerImpl - OrientDB version: 2.2.36 (build d3beb772c02098ceaea89779a7afd4b7305d3788, branch 2.2.x)
2022-04-18 05:13:53,524+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.internal.orient.DatabaseServerImpl$1 - OrientDB Server v2.2.36 (build d3beb772c02098ceaea89779a7afd4b7305d3788, branch 2.2.x) is starting up...
2022-04-18 05:13:53,527+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.internal.orient.DatabaseServerImpl$1 - Databases directory: /nexus-data/db
2022-04-18 05:13:53,726+0000 INFO [FelixStartLevel] *SYSTEM com.orientechnologies.orient.core.engine.OMemoryAndLocalPaginatedEnginesInitializer - Configuration of usage of soft references inside of containers of results of SQL execution
2022-04-18 05:13:53,727+0000 INFO [FelixStartLevel] *SYSTEM com.orientechnologies.orient.core.engine.OMemoryAndLocalPaginatedEnginesInitializer - Initial and maximum values of heap memory usage are equal, containers of results of SQL executors will use soft references by default
2022-04-18 05:13:53,727+0000 INFO [FelixStartLevel] *SYSTEM com.orientechnologies.orient.core.engine.OMemoryAndLocalPaginatedEnginesInitializer - Auto configuration of disk cache size.
2022-04-18 05:13:53,768+0000 INFO [FelixStartLevel] *SYSTEM com.orientechnologies.common.jna.ONative - 16641896448 B/15870 MB/15 GB of physical memory were detected on machine
2022-04-18 05:13:53,777+0000 INFO [FelixStartLevel] *SYSTEM com.orientechnologies.common.jna.ONative - Soft memory limit for this process is set to -1 B/-1 MB/-1 GB
2022-04-18 05:13:53,778+0000 INFO [FelixStartLevel] *SYSTEM com.orientechnologies.common.jna.ONative - Hard memory limit for this process is set to -1 B/-1 MB/-1 GB
2022-04-18 05:13:53,778+0000 INFO [FelixStartLevel] *SYSTEM com.orientechnologies.common.jna.ONative - Path to 'memory' cgroup is '/docker/cc490e2a6526876365364a30e3f839f223dd1aa3e3fc13d0db5e713dad8cd1b7'
2022-04-18 05:13:53,779+0000 INFO [FelixStartLevel] *SYSTEM com.orientechnologies.common.jna.ONative - Mounting path for memory cgroup controller is '/sys/fs/cgroup/memory'
2022-04-18 05:13:53,779+0000 INFO [FelixStartLevel] *SYSTEM com.orientechnologies.common.jna.ONative - Can not find '/sys/fs/cgroup/memory/docker/cc490e2a6526876365364a30e3f839f223dd1aa3e3fc13d0db5e713dad8cd1b7' path for memory cgroup, it is supposed that process is running in container, will try to read root '/sys/fs/cgroup/memory' memory cgroup data
2022-04-18 05:13:53,779+0000 INFO [FelixStartLevel] *SYSTEM com.orientechnologies.common.jna.ONative - cgroup soft memory limit is 9223372036854771712 B/8796093022207 MB/8589934591 GB
2022-04-18 05:13:53,781+0000 INFO [FelixStartLevel] *SYSTEM com.orientechnologies.common.jna.ONative - cgroup hard memory limit is 9223372036854771712 B/8796093022207 MB/8589934591 GB
2022-04-18 05:13:53,781+0000 INFO [FelixStartLevel] *SYSTEM com.orientechnologies.common.jna.ONative - Detected memory limit for current process is 16641896448 B/15870 MB/15 GB
2022-04-18 05:13:53,783+0000 INFO [FelixStartLevel] *SYSTEM com.orientechnologies.orient.core.engine.OMemoryAndLocalPaginatedEnginesInitializer - OrientDB auto-config DISKCACHE=2,703MB (heap=2,404MB direct=2,703MB os=15,870MB)
2022-04-18 05:13:53,787+0000 INFO [FelixStartLevel] *SYSTEM com.orientechnologies.orient.core.config.OGlobalConfiguration - Lowering disk cache size from 2,703MB to 2,701MB.
2022-04-18 05:13:53,849+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.internal.orient.DatabaseServerImpl$1 - Found ORIENTDB_ROOT_PASSWORD variable, using this value as root's password
2022-04-18 05:13:54,157+0000 INFO [FelixStartLevel] *SYSTEM com.orientechnologies.orient.server.handler.OJMXPlugin - JMX plugin installed and active: profilerManaged=true
2022-04-18 05:13:54,158+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.internal.orient.DatabaseServerImpl$1 - OrientDB Studio available at $ANSI{blue http://localhost:2480/studio/index.html}
2022-04-18 05:13:54,158+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.internal.orient.DatabaseServerImpl$1 - $ANSI{green:italic OrientDB Server is active} v2.2.36 (build d3beb772c02098ceaea89779a7afd4b7305d3788, branch 2.2.x).
2022-04-18 05:13:54,159+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.internal.orient.DatabaseServerImpl - Activated
2022-04-18 05:13:54,268+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusLifecycleManager - Start RESTORE
2022-04-18 05:13:54,528+0000 INFO [FelixStartLevel] *SYSTEM com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage - Storage 'plocal:/nexus-data/db/component' is opened under OrientDB distribution : 2.2.36 (build d3beb772c02098ceaea89779a7afd4b7305d3788, branch 2.2.x)
2022-04-18 05:13:54,555+0000 INFO [ForkJoinPool.commonPool-worker-1] *SYSTEM com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage - Storage 'plocal:/nexus-data/db/config' is opened under OrientDB distribution : 2.2.36 (build d3beb772c02098ceaea89779a7afd4b7305d3788, branch 2.2.x)
2022-04-18 05:13:55,084+0000 INFO [ForkJoinPool.commonPool-worker-2] *SYSTEM com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage - Storage 'plocal:/nexus-data/db/security' is opened under OrientDB distribution : 2.2.36 (build d3beb772c02098ceaea89779a7afd4b7305d3788, branch 2.2.x)
2022-04-18 05:13:55,375+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusLifecycleManager - Start UPGRADE
2022-04-18 05:13:55,495+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusLifecycleManager - Start SCHEMAS
2022-04-18 05:13:55,513+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.internal.orient.DatabaseManagerImpl - Configuring OrientDB pool config with per-core limit of 16
2022-04-18 05:13:55,537+0000 ERROR [FelixStartLevel] *SYSTEM com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage - Exception 75830E7C in storage plocal:/nexus-data/db/security: 2.2.36 (build d3beb772c02098ceaea89779a7afd4b7305d3788, branch 2.2.x)
com.orientechnologies.orient.core.exception.OStorageException: File with name 'realm.cpm' does not exist in storage 'security'
DB name="security"
at com.orientechnologies.orient.core.storage.cache.local.OWOWCache.loadFile(OWOWCache.java:475)
at com.orientechnologies.orient.core.storage.impl.local.paginated.base.ODurableComponent.openFile(ODurableComponent.java:180)
at com.orientechnologies.orient.core.storage.impl.local.paginated.OClusterPositionMap.open(OClusterPositionMap.java:55)
at com.orientechnologies.orient.core.storage.impl.local.paginated.OPaginatedCluster.open(OPaginatedCluster.java:227)
at com.orientechnologies.orient.core.storage.impl.local.OAbstractPaginatedStorage.addClusterInternal(OAbstractPaginatedStorage.java:4368)
at com.orientechnologies.orient.core.storage.impl.local.OAbstractPaginatedStorage.doAddCluster(OAbstractPaginatedStorage.java:4347)
at com.orientechnologies.orient.core.storage.impl.local.OAbstractPaginatedStorage.addCluster(OAbstractPaginatedStorage.java:681)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.addCluster(ODatabaseDocumentTx.java:1380)
at com.orientechnologies.orient.core.metadata.schema.OSchemaShared.createClusters(OSchemaShared.java:1235)
at com.orientechnologies.orient.core.metadata.schema.OSchemaShared.doCreateClass(OSchemaShared.java:1125)
at com.orientechnologies.orient.core.metadata.schema.OSchemaShared.createClass(OSchemaShared.java:403)
at com.orientechnologies.orient.core.metadata.schema.OSchemaProxy.createClass(OSchemaProxy.java:218)
at org.sonatype.nexus.orient.entity.EntityAdapter.register(EntityAdapter.java:172)
at org.sonatype.nexus.orient.entity.EntityAdapter.register(EntityAdapter.java:203)
at org.sonatype.nexus.internal.security.realm.orient.OrientRealmConfigurationStore.doStart(OrientRealmConfigurationStore.java:67)
at org.sonatype.nexus.common.stateguard.StateGuardLifecycleSupport.start(StateGuardLifecycleSupport.java:69)
at org.sonatype.nexus.internal.security.realm.orient.OrientRealmConfigurationStore$$EnhancerByGuice$$268056521.GUICE$TRAMPOLINE()
at com.google.inject.internal.InterceptorStackCallback$InterceptedMethodInvocation.proceed(InterceptorStackCallback.java:74)
at org.sonatype.nexus.common.stateguard.MethodInvocationAction.run(MethodInvocationAction.java:39)
at org.sonatype.nexus.common.stateguard.StateGuard$TransitionImpl.run(StateGuard.java:193)
at org.sonatype.nexus.common.stateguard.TransitionsInterceptor.invoke(TransitionsInterceptor.java:57)
at com.google.inject.internal.InterceptorStackCallback$InterceptedMethodInvocation.proceed(InterceptorStackCallback.java:75)
at com.google.inject.internal.InterceptorStackCallback.invoke(InterceptorStackCallback.java:55)
at org.sonatype.nexus.internal.security.realm.orient.OrientRealmConfigurationStore$$EnhancerByGuice$$268056521.start()
at org.sonatype.nexus.extender.NexusLifecycleManager.startComponent(NexusLifecycleManager.java:199)
at org.sonatype.nexus.extender.NexusLifecycleManager.to(NexusLifecycleManager.java:111)
at org.sonatype.nexus.extender.NexusContextListener.moveToPhase(NexusContextListener.java:319)
at org.sonatype.nexus.extender.NexusContextListener.frameworkEvent(NexusContextListener.java:216)
at org.apache.felix.framework.Felix.setActiveStartLevel(Felix.java:1597)
at org.apache.felix.framework.FrameworkStartLevelImpl.run(FrameworkStartLevelImpl.java:308)
at java.lang.Thread.run(Thread.java:748)
2022-04-18 05:13:55,555+0000 ERROR [FelixStartLevel] *SYSTEM org.sonatype.nexus.internal.security.realm.orient.OrientRealmConfigurationStore - Failed transition: NEW -> STARTED
com.orientechnologies.orient.core.exception.OStorageException: File with name 'realm.cpm' does not exist in storage 'security'
DB name="security"
at com.orientechnologies.orient.core.storage.cache.local.OWOWCache.loadFile(OWOWCache.java:475)
at com.orientechnologies.orient.core.storage.impl.local.paginated.base.ODurableComponent.openFile(ODurableComponent.java:180)
at com.orientechnologies.orient.core.storage.impl.local.paginated.OClusterPositionMap.open(OClusterPositionMap.java:55)
at com.orientechnologies.orient.core.storage.impl.local.paginated.OPaginatedCluster.open(OPaginatedCluster.java:227)
at com.orientechnologies.orient.core.storage.impl.local.OAbstractPaginatedStorage.addClusterInternal(OAbstractPaginatedStorage.java:4368)
at com.orientechnologies.orient.core.storage.impl.local.OAbstractPaginatedStorage.doAddCluster(OAbstractPaginatedStorage.java:4347)
at com.orientechnologies.orient.core.storage.impl.local.OAbstractPaginatedStorage.addCluster(OAbstractPaginatedStorage.java:681)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.addCluster(ODatabaseDocumentTx.java:1380)
at com.orientechnologies.orient.core.metadata.schema.OSchemaShared.createClusters(OSchemaShared.java:1235)
at com.orientechnologies.orient.core.metadata.schema.OSchemaShared.doCreateClass(OSchemaShared.java:1125)
at com.orientechnologies.orient.core.metadata.schema.OSchemaShared.createClass(OSchemaShared.java:403)
at com.orientechnologies.orient.core.metadata.schema.OSchemaProxy.createClass(OSchemaProxy.java:218)
at org.sonatype.nexus.orient.entity.EntityAdapter.register(EntityAdapter.java:172)
at org.sonatype.nexus.orient.entity.EntityAdapter.register(EntityAdapter.java:203)
at org.sonatype.nexus.internal.security.realm.orient.OrientRealmConfigurationStore.doStart(OrientRealmConfigurationStore.java:67)
at org.sonatype.nexus.common.stateguard.StateGuardLifecycleSupport.start(StateGuardLifecycleSupport.java:69)
at org.sonatype.nexus.common.stateguard.MethodInvocationAction.run(MethodInvocationAction.java:39)
at org.sonatype.nexus.common.stateguard.StateGuard$TransitionImpl.run(StateGuard.java:193)
at org.sonatype.nexus.common.stateguard.TransitionsInterceptor.invoke(TransitionsInterceptor.java:57)
at org.sonatype.nexus.extender.NexusLifecycleManager.startComponent(NexusLifecycleManager.java:199)
at org.sonatype.nexus.extender.NexusLifecycleManager.to(NexusLifecycleManager.java:111)
at org.sonatype.nexus.extender.NexusContextListener.moveToPhase(NexusContextListener.java:319)
at org.sonatype.nexus.extender.NexusContextListener.frameworkEvent(NexusContextListener.java:216)
at org.apache.felix.framework.Felix.setActiveStartLevel(Felix.java:1597)
at org.apache.felix.framework.FrameworkStartLevelImpl.run(FrameworkStartLevelImpl.java:308)
at java.lang.Thread.run(Thread.java:748)
2022-04-18 05:13:55,558+0000 ERROR [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusContextListener - Failed to start nexus
com.orientechnologies.orient.core.exception.OStorageException: File with name 'realm.cpm' does not exist in storage 'security'
DB name="security"
at com.orientechnologies.orient.core.storage.cache.local.OWOWCache.loadFile(OWOWCache.java:475)
at com.orientechnologies.orient.core.storage.impl.local.paginated.base.ODurableComponent.openFile(ODurableComponent.java:180)
at com.orientechnologies.orient.core.storage.impl.local.paginated.OClusterPositionMap.open(OClusterPositionMap.java:55)
at com.orientechnologies.orient.core.storage.impl.local.paginated.OPaginatedCluster.open(OPaginatedCluster.java:227)
at com.orientechnologies.orient.core.storage.impl.local.OAbstractPaginatedStorage.addClusterInternal(OAbstractPaginatedStorage.java:4368)
at com.orientechnologies.orient.core.storage.impl.local.OAbstractPaginatedStorage.doAddCluster(OAbstractPaginatedStorage.java:4347)
at com.orientechnologies.orient.core.storage.impl.local.OAbstractPaginatedStorage.addCluster(OAbstractPaginatedStorage.java:681)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.addCluster(ODatabaseDocumentTx.java:1380)
at com.orientechnologies.orient.core.metadata.schema.OSchemaShared.createClusters(OSchemaShared.java:1235)
at com.orientechnologies.orient.core.metadata.schema.OSchemaShared.doCreateClass(OSchemaShared.java:1125)
at com.orientechnologies.orient.core.metadata.schema.OSchemaShared.createClass(OSchemaShared.java:403)
at com.orientechnologies.orient.core.metadata.schema.OSchemaProxy.createClass(OSchemaProxy.java:218)
at org.sonatype.nexus.orient.entity.EntityAdapter.register(EntityAdapter.java:172)
at org.sonatype.nexus.orient.entity.EntityAdapter.register(EntityAdapter.java:203)
at org.sonatype.nexus.internal.security.realm.orient.OrientRealmConfigurationStore.doStart(OrientRealmConfigurationStore.java:67)
at org.sonatype.nexus.common.stateguard.StateGuardLifecycleSupport.start(StateGuardLifecycleSupport.java:69)
at org.sonatype.nexus.common.stateguard.MethodInvocationAction.run(MethodInvocationAction.java:39)
at org.sonatype.nexus.common.stateguard.StateGuard$TransitionImpl.run(StateGuard.java:193)
at org.sonatype.nexus.common.stateguard.TransitionsInterceptor.invoke(TransitionsInterceptor.java:57)
at org.sonatype.nexus.extender.NexusLifecycleManager.startComponent(NexusLifecycleManager.java:199)
at org.sonatype.nexus.extender.NexusLifecycleManager.to(NexusLifecycleManager.java:111)
at org.sonatype.nexus.extender.NexusContextListener.moveToPhase(NexusContextListener.java:319)
at org.sonatype.nexus.extender.NexusContextListener.frameworkEvent(NexusContextListener.java:216)
at org.apache.felix.framework.Felix.setActiveStartLevel(Felix.java:1597)
at org.apache.felix.framework.FrameworkStartLevelImpl.run(FrameworkStartLevelImpl.java:308)
at java.lang.Thread.run(Thread.java:748)
2022-04-18 05:13:55,562+0000 ERROR [FelixStartLevel] *SYSTEM Felix - Framework listener delivery error.
com.orientechnologies.orient.core.exception.OStorageException: File with name 'realm.cpm' does not exist in storage 'security'
DB name="security"
at com.orientechnologies.orient.core.storage.cache.local.OWOWCache.loadFile(OWOWCache.java:475)
at com.orientechnologies.orient.core.storage.impl.local.paginated.base.ODurableComponent.openFile(ODurableComponent.java:180)
at com.orientechnologies.orient.core.storage.impl.local.paginated.OClusterPositionMap.open(OClusterPositionMap.java:55)
at com.orientechnologies.orient.core.storage.impl.local.paginated.OPaginatedCluster.open(OPaginatedCluster.java:227)
at com.orientechnologies.orient.core.storage.impl.local.OAbstractPaginatedStorage.addClusterInternal(OAbstractPaginatedStorage.java:4368)
at com.orientechnologies.orient.core.storage.impl.local.OAbstractPaginatedStorage.doAddCluster(OAbstractPaginatedStorage.java:4347)
at com.orientechnologies.orient.core.storage.impl.local.OAbstractPaginatedStorage.addCluster(OAbstractPaginatedStorage.java:681)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.addCluster(ODatabaseDocumentTx.java:1380)
at com.orientechnologies.orient.core.metadata.schema.OSchemaShared.createClusters(OSchemaShared.java:1235)
at com.orientechnologies.orient.core.metadata.schema.OSchemaShared.doCreateClass(OSchemaShared.java:1125)
at com.orientechnologies.orient.core.metadata.schema.OSchemaShared.createClass(OSchemaShared.java:403)
at com.orientechnologies.orient.core.metadata.schema.OSchemaProxy.createClass(OSchemaProxy.java:218)
at org.sonatype.nexus.orient.entity.EntityAdapter.register(EntityAdapter.java:172)
at org.sonatype.nexus.orient.entity.EntityAdapter.register(EntityAdapter.java:203)
at org.sonatype.nexus.internal.security.realm.orient.OrientRealmConfigurationStore.doStart(OrientRealmConfigurationStore.java:67)
at org.sonatype.nexus.common.stateguard.StateGuardLifecycleSupport.start(StateGuardLifecycleSupport.java:69)
at org.sonatype.nexus.common.stateguard.MethodInvocationAction.run(MethodInvocationAction.java:39)
at org.sonatype.nexus.common.stateguard.StateGuard$TransitionImpl.run(StateGuard.java:193)
at org.sonatype.nexus.common.stateguard.TransitionsInterceptor.invoke(TransitionsInterceptor.java:57)
at org.sonatype.nexus.extender.NexusLifecycleManager.startComponent(NexusLifecycleManager.java:199)
at org.sonatype.nexus.extender.NexusLifecycleManager.to(NexusLifecycleManager.java:111)
at org.sonatype.nexus.extender.NexusContextListener.moveToPhase(NexusContextListener.java:319)
at org.sonatype.nexus.extender.NexusContextListener.frameworkEvent(NexusContextListener.java:216)
at org.apache.felix.framework.Felix.setActiveStartLevel(Felix.java:1597)
at org.apache.felix.framework.FrameworkStartLevelImpl.run(FrameworkStartLevelImpl.java:308)
at java.lang.Thread.run(Thread.java:748)
2022-04-18 05:13:55,599+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusContextListener - Uptime: 30 seconds and 163 milliseconds (nexus-oss-edition/3.38.1.01)
2022-04-18 05:13:55,599+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusLifecycleManager - Shutting down
2022-04-18 05:13:55,599+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusLifecycleManager - Stop UPGRADE
2022-04-18 05:13:55,600+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusLifecycleManager - Stop RESTORE
2022-04-18 05:13:55,600+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusLifecycleManager - Stop STORAGE
2022-04-18 05:13:55,615+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.internal.orient.DatabaseManagerImpl - Stopping 1 pools
2022-04-18 05:13:55,616+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.internal.orient.DatabaseManagerImpl - Stopping pool: config
2022-04-18 05:13:55,616+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.internal.orient.DatabaseServerImpl$1 - OrientDB Server is shutting down...
2022-04-18 05:13:55,616+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.internal.orient.DatabaseServerImpl$1 - Shutting down protocols
2022-04-18 05:13:55,616+0000 INFO [FelixStartLevel] *SYSTEM com.orientechnologies.orient.server.plugin.OServerPluginManager - Shutting down plugins:
2022-04-18 05:13:55,617+0000 INFO [FelixStartLevel] *SYSTEM com.orientechnologies.orient.server.plugin.OServerPluginManager - - jmx
2022-04-18 05:13:55,617+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.internal.orient.DatabaseServerImpl$1 - OrientDB Server shutdown complete
2022-04-18 05:13:55,618+0000 INFO [FelixStartLevel] *SYSTEM com.orientechnologies.orient.core.Orient - Orient Engine is shutting down...
2022-04-18 05:13:55,629+0000 INFO [FelixStartLevel] *SYSTEM com.orientechnologies.orient.core.Orient - - shutdown storage: component...
2022-04-18 05:13:55,755+0000 INFO [FelixStartLevel] *SYSTEM com.orientechnologies.orient.core.Orient - - shutdown storage: security...
2022-04-18 05:13:55,961+0000 INFO [FelixStartLevel] *SYSTEM com.orientechnologies.orient.core.Orient - - shutdown storage: config...
2022-04-18 05:13:56,042+0000 INFO [FelixStartLevel] *SYSTEM com.orientechnologies.orient.core.Orient - - shutdown storage: OSystem...
2022-04-18 05:13:56,416+0000 INFO [FelixStartLevel] *SYSTEM com.orientechnologies.orient.core.Orient - OrientDB Engine shutdown complete
2022-04-18 05:13:56,416+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.internal.orient.DatabaseServerImpl - Shutdown
2022-04-18 05:13:56,417+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusLifecycleManager - Stop KERNEL
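For anyone who hits the same secondary failure after the cgroup errors: the repeated "File with name 'realm.cpm' does not exist in storage 'security'" exception above means the embedded OrientDB security database is missing one of its cluster files, so Nexus aborts the NEW -> STARTED transition and shuts the kernel back down. A minimal way to confirm the damage before restoring from a backup, assuming the official sonatype/nexus3 image with its default /nexus-data data directory (the volume name "nexus-data" below is a placeholder, not from this thread):

# Sketch only: list the OrientDB 'security' storage from the host.
# Replace "nexus-data" with the volume or bind mount actually used for /nexus-data.
docker run --rm -v nexus-data:/nexus-data --entrypoint ls sonatype/nexus3 -l /nexus-data/db/security
# A healthy storage directory contains realm.cpm and realm.pcl among the other
# .cpm/.pcl files; if they are missing or zero-length, restore /nexus-data/db
# from a known-good backup rather than letting Nexus retry the failed start.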
