
chown error on bind mount when trying to launch postgres via docker compose #1209

Open
valouille opened this issue Jan 7, 2022 · 54 comments
Labels: area/volume (Access to host volumes from inside the VM or containers), kind/bug (Something isn't working), platform/macos, triage/need-to-repro (Needs to be reproduced by dev team), triage/next-candidate (Discuss if it should be moved to "Next" milestone)

@valouille

Rancher Desktop Version

0.7.1

Rancher Desktop K8s Version

1.22.5

What operating system are you using?

macOS

Operating System / Build Version

macOS Monterey 12.1

What CPU architecture are you using?

arm64 (Apple Silicon)

Windows User Only

No response

Actual Behavior

When trying to launch a Postgres container with a bind mount, it fails at startup with a chown error on the mounted folder.

Steps to Reproduce

Clone the repo https://github.com/docker/awesome-compose, go to the nginx-golang-postgres folder, and edit docker-compose.yml to use a bind mount, as follows:

services:
  backend:
    build: backend
    secrets:
      - db-password
    depends_on:
      - db
  db:
    image: postgres
    restart: always
    secrets:
      - db-password
    volumes:
      - $PWD/db-data:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=example
      - POSTGRES_PASSWORD_FILE=/run/secrets/db-password
    expose:
      - 5432

  proxy:
    build: proxy
    ports:
      - 8000:8000
    depends_on:
      - backend
#volumes:
#  db-data:
secrets:
  db-password:
    file: db/password.txt

Run the following command: docker compose up

Result

Error response from daemon: error while creating mount source path '~/github.com/docker/awesome-compose/nginx-golang-postgres/db-data': chown ~/github.com/docker/awesome-compose/nginx-golang-postgres/db-data: permission denied

Expected Behavior

I would expect the folder to be usable as a bind mount, so that I can access and modify the files directly from the host.

Additional Information

No response

@valouille valouille added the kind/bug Something isn't working label Jan 7, 2022
@jandubois jandubois added this to the v1.0.0 milestone Jan 12, 2022
@guild-jonathan-kaczynski

guild-jonathan-kaczynski commented Jan 24, 2022

Rancher Desktop Version

0.7.1 and 1.0.0-beta.1

Rancher Desktop K8s Version

1.23.1 (latest), using the dockerd (moby) container runtime

What operating system are you using?

macOS

Operating System / Build Version

macOS Catalina 10.15.7 (19H1615)

What CPU architecture are you using?

x86-64 (Intel)

Windows User Only

No response

Actual Behavior

Trying to chown a volume-mounted folder from inside the container fails with a permission error.

Steps to Reproduce

Here is some of the output from running the entrypoint script manually, in case it helps.

$ mkdir ./postgres

$ ls -ld ./postgres
drwxr-xr-x  2 jonathankaczynski  staff  64 Jan 24 13:38 ./postgres

$ docker run --rm -it \
    --entrypoint /bin/bash \
    -v "$(pwd)/postgres:/var/lib/postgresql/data" \
    postgres
root@98a1f91309fc:/# bash -x /usr/local/bin/docker-entrypoint.sh postgres
… snip …
+ docker_create_db_directories
+ local user
++ id -u
+ user=0
+ mkdir -p /var/lib/postgresql/data
+ chmod 700 /var/lib/postgresql/data
+ mkdir -p /var/run/postgresql
+ chmod 775 /var/run/postgresql
+ '[' -n '' ']'
+ '[' 0 = 0 ']'
+ find /var/lib/postgresql/data '!' -user postgres -exec chown postgres '{}' +
root@ff5dae1c266d:/# find /var/lib/postgresql/data '!' -user postgres
/var/lib/postgresql/data

root@ff5dae1c266d:/# ls -ld /var/lib/postgresql/data
drwx------ 1 501 dialout 64 Jan 24 18:32 /var/lib/postgresql/data

root@ff5dae1c266d:/# exit
exit
$ ls -ld ./postgres
drwx------  2 jonathankaczynski  staff  64 Jan 24 13:32 ./postgres

@guild-jonathan-kaczynski

guild-jonathan-kaczynski commented Jan 24, 2022

Here's a minimal test case derived from the practical example above.

There seem to be two potentially independent mounted-volume errors.

The first error occurs when the mounted volume does not exist on the host OS (macOS) prior to running the docker command.

$ ls -ld ./foobar
ls: ./foobar: No such file or directory

$ docker run --rm -it -v "$(pwd)/foobar:/opt/foobar" debian:bullseye-slim
docker: Error response from daemon: error while creating mount source path '/Users/jonathankaczynski/foobar': chown /Users/jonathankaczynski/foobar: permission denied.

$ ls -ld ./foobar
drwxr-xr-x  2 jonathankaczynski  staff  64 Jan 24 14:50 ./foobar

$ docker run --rm -it -v "$(pwd)/foobar:/opt/foobar" debian:bullseye-slim

root@392147ccc4f5:/# exit
exit

The second error occurs when attempting to change the ownership of the mount point from within the container, though changing the file mode succeeds.

$ mkdir ./foobar

$ docker run --rm -it -v "$(pwd)/foobar:/opt/foobar" debian:bullseye-slim

root@cc1c034865bd:/# ls -ld /opt/foobar
drwxr-xr-x 1 501 dialout 64 Jan 24 19:50 /opt/foobar

root@cc1c034865bd:/# groupadd -r postgres --gid=999

root@cc1c034865bd:/# useradd -r -g postgres --uid=999 postgres

root@cc1c034865bd:/# chown postgres /opt/foobar
chown: changing ownership of '/opt/foobar': Permission denied

root@cc1c034865bd:/# chmod 700 /opt/foobar

root@cc1c034865bd:/# exit
exit

$ ls -ld ./foobar
drwx------  2 jonathankaczynski  staff  64 Jan 24 14:50 ./foobar

@gaktive gaktive modified the milestones: v1.0.0, v1.0.1 Jan 24, 2022
@luiz290788

I'm having the same problem; it's the only thing keeping me from using Rancher Desktop. Any progress here?

@jeesmon

jeesmon commented Jan 28, 2022

macOS, RD v1.0.0

I'm getting a permission error when running the postgres image with a bind mount (-v $(pwd):/var/lib/pgsql/data):

docker run --rm --name postgresql -e POSTGRESQL_DATABASE=my-db -e POSTGRESQL_USER=user -e POSTGRESQL_PASSWORD=pass -p 5432:5432 -v $(pwd):/var/lib/pgsql/data centos/postgresql-96-centos7

The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /var/lib/pgsql/data/userdata ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... FATAL:  could not read block 2 in file "base/1/1255_fsm": read only 0 of 8192 bytes
PANIC:  cannot abort transaction 1, it was already committed
child process was terminated by signal 6: Aborted
initdb: removing contents of data directory "/var/lib/pgsql/data/userdata"

But if I use a named volume (-v postgres-data:/var/lib/pgsql/data), it works fine:

docker run --rm --name postgresql -e POSTGRESQL_DATABASE=my-db -e POSTGRESQL_USER=user -e POSTGRESQL_PASSWORD=pass -p 5432:5432 -v postgres-data:/var/lib/pgsql/data centos/postgresql-96-centos7

The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /var/lib/pgsql/data/userdata ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

Success. You can now start the database server using:

    pg_ctl -D /var/lib/pgsql/data/userdata -l logfile start


WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
waiting for server to start....LOG:  redirecting log output to logging collector process
HINT:  Future log output will appear in directory "pg_log".
 done
server started
/var/run/postgresql:5432 - accepting connections
=> sourcing /usr/share/container-scripts/postgresql/start/set_passwords.sh ...
ALTER ROLE
waiting for server to shut down.... done
server stopped
Starting server...
LOG:  redirecting log output to logging collector process
HINT:  Future log output will appear in directory "pg_log".

docker volume list
DRIVER    VOLUME NAME
local     postgres-data
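
(An aside: the named volume works because it lives on the VM's own ext4 filesystem rather than on the host mount, as a Lima maintainer notes further down this thread, so chown behaves normally inside it. A quick sketch to confirm where the data actually lives; docker volume inspect is standard docker CLI, and the Mountpoint it prints is a path inside the VM, not on the macOS host:

$ docker volume inspect postgres-data --format '{{ .Mountpoint }}'
# typically /var/lib/docker/volumes/postgres-data/_data (a path inside the VM)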

@petersondrew

petersondrew commented Jan 31, 2022

I think this may be an issue with the underlying Lima; I experience the exact same behavior with both Rancher Desktop and colima.

Possibly related lima-vm/lima#504

@guild-jonathan-kaczynski

They converted that issue into a discussion, lima-vm/lima#505, and marked it as answered.

sshfs isn't robust (and fast) enough to be used as /var/lib/postgresql.
Please use lima nerdctl volume create to create a named volume inside the guest ext4 filesystem.

To me, it doesn't feel like that answer addresses the broader concerns raised above.
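
For reference, the suggested workaround translates to something like this (a sketch: with Rancher Desktop's moby runtime the docker CLI stands in for lima nerdctl, and with the containerd runtime nerdctl volume create is the analogue):

$ docker volume create db-data
$ docker run --rm -e POSTGRES_PASSWORD=secret -v db-data:/var/lib/postgresql/data postgres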

@guild-jonathan-kaczynski

There's also this earlier issue thread: lima-vm/lima#231

The last comment, from December, was:

The plan is to use mapped-xattr or mapped-file of virtio 9P, but the patch is not merged for macOS hosts yet, and seems to need more testers: NixOS/nixpkgs#122420

@willcohen

As a follow-up, the latest version of the patch is https://gitlab.com/wwcohen/qemu/-/tree/9p-darwin, and that's where the in-progress work will live as it moves toward resubmission upstream. Any comments on how to improve it would be greatly welcomed before I submit again.

@guild-jonathan-kaczynski

From NixOS/nixpkgs#122420, it looks like good progress has been made:

9p-darwin has been merged upstream

I'm also going to close this PR in favor of NixOS/nixpkgs#162243, at this point. I think it's okay if any additional final discussion still happens here since this particular issue has been referenced in so many places, but the work is now done!

@willcohen

Please let me know if you have any questions!

@gaktive gaktive modified the milestones: Next, Later Mar 15, 2022
@gunamata gunamata modified the milestones: Next, Later Apr 12, 2022
@dennisdaotvlk

Facing the same error with docker-compose and docker run -v


@jandubois jandubois modified the milestones: Next, Later May 20, 2022
@marnen

marnen commented Jun 13, 2022

Still an issue with Rancher Desktop 1.4.1, exactly as described above. This is the only thing preventing me from using Rancher Desktop rather than Docker.

@jandubois jandubois added the triage/next-candidate Discuss if it should be moved to "Next" milestone label Jul 30, 2023
@sourcecodemage

Good call. I tried it and got "failed: Too many levels of symbolic links (40)", so it won't work for my use case. I'll try the colima method.

@sourcecodemage

FWIW, VZ is available in 1.9.1:

I found and enabled those settings late yesterday. RD said it needed to restart afterwards, so I stopped and started it.

12+ hours later, it still says "starting". I'll try rebooting my workstation and see how things go.

@jsoref

jsoref commented Aug 15, 2023

I tripped on something like that, but I can't remember what my problem was. Visit Slack (see https://rancherdesktop.io/) and ask for help.

@fivestones

I'm also having the same problem as the OP: running Postgres via docker-compose with a bind mount on Apple Silicon (M2), I get that same error. I'm running RD 1.10.0.

It looks like there's at least one workaround, but it might cause trouble for other containers that use symlinks.

I'm happy to provide more information if it can help someone debug this. Unfortunately I don't know where to start debugging/fixing it myself.

@santoshborse

This solved my issue, thanks for posting.

You can include the following content in ~/Library/Application\ Support/rancher-desktop/lima/_config/override.yaml:

mountType: 9p
mounts:
  - location: "~"
    9p:
      securityModel: mapped-xattr
      cache: "mmap"

It should allow this to work (you must restart Rancher Desktop to apply this setting).

Caveats: any symlinks on your host system will be seen as the referenced object in the VM/container. If there's a symlink loop, and something tries to follow it, it'll eat its own tail (potentially slowly depending on how things behave).

The databases I'm playing w/ (postgres, redis, neo4j) don't generally deal in symlinks, so I believe it's a satisfactory configuration for my database use cases. (It may create a mess for all of my other use cases, but that remains to be seen.)
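
For anyone applying this from a shell, a minimal sketch (it just writes the override file quoted above; restart Rancher Desktop afterwards):

$ mkdir -p ~/Library/Application\ Support/rancher-desktop/lima/_config
$ cat > ~/Library/Application\ Support/rancher-desktop/lima/_config/override.yaml <<'EOF'
mountType: 9p
mounts:
  - location: "~"
    9p:
      securityModel: mapped-xattr
      cache: "mmap"
EOF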

@chrisdaly3

You can include the following content in ~/Library/Application\ Support/rancher-desktop/lima/_config/override.yaml:

mountType: 9p
mounts:
  - location: "~"
    9p:
      securityModel: mapped-xattr
      cache: "mmap"

It should allow this to work (you must restart Rancher Desktop to apply this setting).

Caveats: any symlinks on your host system will be seen as the referenced object in the VM/container. If there's a symlink loop, and something tries to follow it, it'll eat its own tail (potentially slowly depending on how things behave).

The databases I'm playing w/ (postgres, redis, neo4j) don't generally deal in symlinks, so I believe it's a satisfactory configuration for my database use cases. (It may create a mess for all of my other use cases, but that remains to be seen.)

Just chiming in in 2024: on an M1 Mac running docker-compose with a Mongo container, I can confirm this resolved the build issue for now. Whether any weird "side effects" pop up remains to be seen. Hopefully a native fix gets rolled out soon.

@jandubois jandubois added the area/volume Access to host volumes from inside the VM or containers label Jan 23, 2024
@gigi888

gigi888 commented Jan 29, 2024

This works for me: https://stackoverflow.com/a/77803515/1183542. What was confusing to me at first is that I never installed Lima explicitly; it is bundled with RD.
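
For reference, Rancher Desktop ships its own copy of Lima; a sketch for poking at it directly, assuming the install and data paths that appear in the logs later in this thread:

$ export LIMA_HOME="$HOME/Library/Application Support/rancher-desktop/lima"
$ "/Applications/Rancher Desktop.app/Contents/Resources/resources/darwin/lima/bin/limactl" list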

@CoderChang65535

After my Mac upgraded to 14.3.1, previously working projects failed with this error:
chown: changing ownership of '/var/lib/mysql/xxx': Permission denied
I mount the MySQL data directory as a bind mount:
volumes:
  - ./docker/mysql/data:/var/lib/mysql

@muhramadhan

My current workaround is to set VZ as the emulation type in Preferences.
So far there's no downside for my use case.
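
The same change can likely be scripted with rdctl (bundled with Rancher Desktop). The exact flag names vary by version and these particular dotted names are an assumption, so treat this as a sketch and confirm with rdctl set --help:

$ # assumed flag names; verify with: rdctl set --help
$ rdctl set --experimental.virtual-machine.type vz
$ rdctl set --experimental.virtual-machine.mount.type virtiofs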

@galusben

I was able to resolve the issue on mac with the following setup:

image (10)
image (9)

I also allowed Rancher Desktop to use admin permissions and disabled Traefik.

@llaszkie

current workaround for me is to set VZ as emulation in preference. so far no downside for my usecase

... and "Volumes/Mount Type" to virtiofs at the same time. Disclaimer: I am also in testing phase for the setting :-)

pb-dod added a commit to ConsultingMD/homebrew-ih-public that referenced this issue Apr 11, 2024
…irtiofs (#90)

This PR addresses the Docker volume issues encountered by users on M2
and M3 Macs, as reported in this thread:
rancher-sandbox/rancher-desktop#1209

@cobbr2 just had a rough time with this one on his M2 Mac. He was seeing
similar permission errors when trying to mount host volumes, preventing
services like PostgreSQL from starting correctly.

We were able to resolve the underlying issue by switching from the
default QEMU emulation to VZ (Virtualization.framework) and enabling
virtiofs for file sharing between the host and the guest VM.

The PR reintroduces the necessary configuration changes that were
previously removed, as these are still required for M2 and M3 Macs
running Rancher Desktop. It also adds a fix so that virtiofs no longer
needs to be selected manually.
@nothing2obvi

VZ and virtiofs also fixed it for me.

@fonsitoubi

fonsitoubi commented Jul 12, 2024

Same behavior on an M1 Mac.
With the latest Rancher Desktop versions, after the Ventura update it's not possible to use VZ (only QEMU), because of the well-known:

Error: /Applications/Rancher Desktop.app/Contents/Resources/resources/darwin/lima/bin/limactl.ventura exited with code 1

'time="2024-07-12T13:57:42+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' +
'time="2024-07-12T13:57:45+02:00" level=info msg="[hostagent] 2024/07/12 13:57:45 tcpproxy: for incoming conn 127.0.0.1:64800, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' +
'time="2024-07-12T13:57:55+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' +
'time="2024-07-12T13:57:58+02:00" level=info msg="[hostagent] 2024/07/12 13:57:58 tcpproxy: for incoming conn 127.0.0.1:64861, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' +
'time="2024-07-12T13:58:08+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' +
'time="2024-07-12T13:58:11+02:00" level=info msg="[hostagent] 2024/07/12 13:58:11 tcpproxy: for incoming conn 127.0.0.1:64913, error'... 5805 more characters,
code: 1,
[Symbol(child-process.command)]: '/Applications/Rancher Desktop.app/Contents/Resources/resources/darwin/lima/bin/limactl.ventura start --tty=false 0'
}


Going back to RD 1.11.1, this chown issue still occurs, and VZ can't be used either, as it gets stuck starting the VM with the progress bar loading infinitely.

`2024-07-12T11:52:05.190Z: > /Applications/Rancher Desktop.app/Contents/Resources/resources/darwin/lima/bin/limactl.ventura list --json
{"name":"0","status":"Stopped","dir":"/Users/fonsito/Library/Application Support/rancher-desktop/lima/0","vmType":"vz","arch":"aarch64","cpuType":"","cpus":2,"memory":4294967296,"disk":107374182400,"network":[{"lima":"rancher-desktop-shared","macAddress":"52:55:55:1a:dd:d4","interface":"rd1"},{"lima":"rancher-desktop-bridged_en0","macAddress":"52:55:55:89:cb:f0","interface":"rd0"}],"sshLocalPort":53709,"sshConfigFile":"/Users/fonsito/Library/Application Support/rancher-desktop/lima/0/ssh.config","config":{"vmType":"vz","os":"Linux","arch":"aarch64","images":[{"location":"/Applications/Rancher Desktop.app/Contents/Resources/resources/darwin/alpine-lima-v0.2.31.rd10-rd-3.18.0.iso","arch":"aarch64"}],"cpus":2,"memory":"4294967296","disk":"100GiB","mounts":[{"location":"","mountPoint":"","writable":true,"sshfs":{"cache":true,"followSymlinks":false,"sftpDriver":""},"9p":{"securityModel":"none","protocolVersion":"9p2000.L","msize":"128KiB","cache":"mmap"},"virtiofs":{}},{"location":"/tmp/rancher-desktop","mountPoint":"/tmp/rancher-desktop","writable":true,"sshfs":{"cache":true,"followSymlinks":false,"sftpDriver":""},"9p":{"securityModel":"none","protocolVersion":"9p2000.L","msize":"128KiB","cache":"mmap"},"virtiofs":{}},{"location":"/Volumes","mountPoint":"/Volumes","writable":true,"sshfs":{"cache":true,"followSymlinks":false,"sftpDriver":""},"9p":{"securityModel":"none","protocolVersion":"9p2000.L","msize":"128KiB","cache":"mmap"},"virtiofs":{}},{"location":"/var/folders","mountPoint":"/var/folders","writable":true,"sshfs":{"cache":true,"followSymlinks":false,"sftpDriver":""},"9p":{"securityModel":"none","protocolVersion":"9p2000.L","msize":"128KiB","cache":"mmap"},"virtiofs":{}},{"location":"/Applications/Rancher Desktop.app/Contents/Resources/resources","mountPoint":"/Applications/Rancher Desktop.app/Contents/Resources/resources","writable":true,"sshfs":{"cache":true,"followSymlinks":false,"sftpDriver":""},"9p":{"securityModel":"none","protocolVersion":"9p2000.L","msize":"128KiB","cache":"mmap"},"virtiofs":{}}],"mountType":"virtiofs","ssh":{"localPort":53709,"loadDotSSHPubKeys":false,"forwardAgent":false,"forwardX11":false,"forwardX11Trusted":false},"firmware":{"legacyBIOS":false},"audio":{"device":""},"video":{"display":"none","vnc":{}},"provision":[{"mode":"system","script":"#!/bin/sh\nset -o errexit -o nounset -o xtrace\nmkdir -p /bootfs\nmount --bind / /bootfs\n# /bootfs/etc is empty on first boot because it has been moved to /mnt/data/etc by lima\n\n# Workaround for https://github.com//issues/6051\n# should be removed when the issue is fixed in Lima itself\nif [ -f /bootfs/etc/network/interfaces ] && ! diff -q /etc/network/interfaces /bootfs/etc/network/interfaces; then\n cp /bootfs/etc/network/interfaces /etc/network/interfaces\n rc-service networking restart\nfi\nif [ -f /bootfs/etc/os-release ] && ! diff -q /etc/os-release /bootfs/etc/os-release; then\n cp /etc/machine-id /bootfs/etc\n cp /etc/ssh/ssh_host* /bootfs/etc/ssh/\n mkdir -p /etc/docker /etc/rancher\n cp -pr /etc/docker /bootfs/etc\n cp -pr /etc/rancher /bootfs/etc\n\n rm -rf /mnt/data/etc.prev\n mkdir /mnt/data/etc.prev\n mv /etc/* /mnt/data/etc.prev\n mv /bootfs/etc/* /etc\n\n # install updated files from /usr/local, e.g. 
nerdctl, buildkit, cni plugins\n cp -pr /bootfs/usr/local /usr\n\n # lima has applied changes while the "old" /etc was in place; restart to apply them to the updated one.\n reboot\nfi\numount /bootfs\nrmdir /bootfs\n"},{"mode":"system","script":"#!/bin/sh\nset -o errexit -o nounset -o xtrace\nfstrim /mnt/data\n"},{"mode":"system","script":"#!/bin/sh\nset -o errexit -o nounset -o xtrace\nsed -i -E 's/^#?MaxSessions +[0-9]+/MaxSessions 25/g' /etc/ssh/sshd_config\nrc-service --ifstarted sshd reload\n"},{"mode":"system","script":"#!/bin/sh\nset -o errexit -o nounset -o xtrace\nif ! [ -d /mnt/data/root ]; then\n mkdir -p /root\n mv /root /mnt/data/root\nfi\nmkdir -p /root\nmount --bind /mnt/data/root /root\n"},{"mode":"system","script":"#!/bin/sh\nset -o errexit -o nounset -o xtrace\nmkdir -p /etc/docker\n\n# Delete certs.d if it is a symlink (from previous boot).\n[ -L /etc/docker/certs.d ] && rm /etc/docker/certs.d\n\n# Create symlink if certs.d doesn't exist (user may have created a regular directory).\nif [ ! -e /etc/docker/certs.d ]; then\n # We don't know if the host is Linux or macOS, so we take a guess based on which mountpoint exists.\n if [ -d "/Users/${LIMA_CIDATA_USER}" ]; then\n ln -s "/Users/${LIMA_CIDATA_USER}/.docker/certs.d" /etc/docker\n elif [ -d "/home/${LIMA_CIDATA_USER}" ]; then\n ln -s "/home/${LIMA_CIDATA_USER}/.docker/certs.d" /etc/docker\n fi\nfi\n"},{"mode":"system","script":"#!/bin/sh\nhostname lima-rancher-desktop\n"},{"mode":"system","script":"#!/bin/sh\nset -o errexit -o nounset -o xtrace\n# During boot is the only safe time to delete old k3s versions.\nrm -rf /var/lib/rancher/k3s/data\n# Delete all tmp files older than 3 days.\nfind /tmp -depth -mtime +3 -delete\n"},{"mode":"system","script":"#!/bin/sh\nset -o errexit -o nounset -o xtrace\nfor dir in / /etc /tmp /var/lib; do\n mount --make-shared "${dir}"\ndone\n"},{"mode":"system","script":"#!/bin/sh\n# Move logrotate to hourly, because busybox crond only handles time jumps up\n# to one hour; this ensures that if the machine is suspended over long\n# periods, things will still happen often enough. 
This is idempotent.\nmv -n /etc/periodic/daily/logrotate /etc/periodic/hourly/\nrc-update add crond default\nrc-service crond start\n"},{"mode":"system","script":"set -o errexit -o nounset -o xtrace\nusermod --append --groups docker "${LIMA_CIDATA_USER}"\n"},{"mode":"system","script":"export CAROOT=/run/mkcert\nmkdir -p $CAROOT\ncd $CAROOT\nmkcert -install\nmkcert localhost\nchown -R nobody:nobody $CAROOT\n"},{"mode":"system","script":"set -o errexit -o nounset -o xtrace\n\n# openresty is backgrounding itself (and writes its own pid file)\nsed -i 's/^command_background/#command_background/' /etc/init.d/openresty\n\n# configure proxy only when allowed-images exists\naiListConf=/usr/local/openresty/nginx/conf/allowed-images.conf\n# Remove the reference to an obsolete image conf filename\noldIAListConf=/usr/local/openresty/nginx/conf/image-allow-list.conf\nsetproxy="[ -f $aiListConf ] && supervise_daemon_args=\"-e HTTPS_PROXY=http://127.0.0.1:3128 \$supervise_daemon_args\""\nfor svc in containerd docker; do\n sed -i "\#-f $aiListConf#d" /etc/init.d/$svc\n sed -i "\#-f $oldIAListConf#d" /etc/init.d/$svc\n sed -i "/^supervise_daemon_args/a $setproxy" /etc/init.d/$svc\ndone\n\n# Make sure openresty log directory exists\ninstall -d -m755 /var/log/openresty\n"},{"mode":"system","script":"#!/bin/sh\nset -o errexit\n\nmount bpffs -t bpf /sys/fs/bpf\nmount --make-shared /sys/fs/bpf\nmount --make-shared /sys/fs/cgroup\n"}],"containerd":{"system":false,"user":false,"archives":[{"location":"https://github.com/containerd/nerdctl/releases/download/v1.6.2/nerdctl-full-1.6.2-linux-amd64.tar.gz","arch":"x86_64","digest":"sha256:37678f27ad341a7c568c5064f62bcbe90cddec56e65f5d684edf8ca955c3e6a4"},{"location":"https://github.com/containerd/nerdctl/releases/download/v1.6.2/nerdctl-full-1.6.2-linux-arm64.tar.gz","arch":"aarch64","digest":"sha256:ea30ab544c057e3a0457194ecd273ffbce58067de534bdfaffe4edf3a4da6357"}]},"guestInstallPrefix":"/usr/local","portForwards":[{"guestIPMustBeZero":true,"guestIP":"0.0.0.0","guestPortRange":[1,65535],"hostIP":"0.0.0.0","hostPortRange":[1,65535],"proto":"tcp"},{"guestIP":"127.0.0.1","guestPortRange":[1,65535],"guestSocket":"/var/run/docker.sock","hostIP":"127.0.0.1","hostPortRange":[1,65535],"hostSocket":"/Users/fonsito/.rd/docker.sock","proto":"tcp"}],"networks":[{"lima":"rancher-desktop-shared","macAddress":"52:55:55:1a:dd:d4","interface":"rd1"},{"lima":"rancher-desktop-bridged_en0","macAddress":"52:55:55:89:cb:f0","interface":"rd0"}],"hostResolver":{"enabled":true,"ipv6":false,"hosts":{"host.docker.internal":"host.lima.internal","host.rancher-desktop.internal":"host.lima.internal","lima-rancher-desktop":"lima-0"}},"propagateProxyEnv":true,"caCerts":{"removeDefaults":false},"rosetta":{"enabled":false,"binfmt":false},"plain":false},"sshAddress":"127.0.0.1","protected":false,"HostOS":"darwin","HostArch":"aarch64","LimaHome":"/Users/fonsito/Library/Application Support/rancher-desktop/lima","IdentityFile":"/Users/fonsito/Library/Application Support/rancher-desktop/lima/_config/user"}

2024-07-12T12:02:05.464Z: > limactl start --tty=false 0
$ c [Error]: /Applications/Rancher Desktop.app/Contents/Resources/resources/darwin/lima/bin/limactl.ventura exited with code 1
at ChildProcess. (/Applications/Rancher Desktop.app/Contents/Resources/app.asar/dist/app/background.js:2:138016)
at ChildProcess.emit (node:events:527:28)
at ChildProcess._handle.onexit (node:internal/child_process:291:12) {
command: [
'/Applications/Rancher Desktop.app/Contents/Resources/resources/darwin/lima/bin/limactl.ventura',
'start',
'--tty=false',
'0'
],
stdout: '',
stderr: 'time="2024-07-12T13:52:05+02:00" level=info msg="Using the existing instance \"0\""\n' +
'time="2024-07-12T13:52:05+02:00" level=info msg="Starting socket_vmnet daemon for \"rancher-desktop-shared\" network"\n' +
'time="2024-07-12T13:52:05+02:00" level=info msg="Starting socket_vmnet daemon for \"rancher-desktop-bridged_en0\" network"\n' +
'time="2024-07-12T13:52:06+02:00" level=info msg="[hostagent] hostagent socket created at /Users/fonsito/Library/Application Support/rancher-desktop/lima/0/ha.sock"\n' +
'time="2024-07-12T13:52:06+02:00" level=info msg="[hostagent] Starting VZ (hint: to watch the boot progress, see \"/Users/fonsito/Library/Application Support/rancher-desktop/lima/0/serial*.log\")"\n' +
'time="2024-07-12T13:52:06+02:00" level=info msg="[hostagent] new connection from to "\n' +
'time="2024-07-12T13:52:07+02:00" level=info msg="SSH Local Port: 53709"\n' +
'time="2024-07-12T13:52:07+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' +
'time="2024-07-12T13:52:07+02:00" level=info msg="[hostagent] [VZ] - vm state change: running"\n' +
'time="2024-07-12T13:52:17+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' +
'time="2024-07-12T13:52:20+02:00" level=info msg="[hostagent] 2024/07/12 13:52:20 tcpproxy: for incoming conn 127.0.0.1:63284, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' +
'time="2024-07-12T13:52:27+02:00" level=error msg="[hostagent] dhcp: unhandled message type: RELEASE"\n' +
'time="2024-07-12T13:52:30+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' +
'time="2024-07-12T13:52:31+02:00" level=info msg="[hostagent] 2024/07/12 13:52:31 tcpproxy: for incoming conn 127.0.0.1:63338, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: connection was refused"\n' +
'time="2024-07-12T13:52:41+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' +
'time="2024-07-12T13:52:51+02:00" level=info msg="[hostagent] 2024/07/12 13:52:51 tcpproxy: for incoming conn 127.0.0.1:63387, error dialing \"192.168.5.15:22\": context deadline exceeded"\n' +
'time="2024-07-12T13:53:01+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' +
'time="2024-07-12T13:53:11+02:00" level=info msg="[hostagent] 2024/07/12 13:53:11 tcpproxy: for incoming conn 127.0.0.1:63475, error dialing \"192.168.5.15:22\": context deadline exceeded"\n' +
'time="2024-07-12T13:53:21+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' +
'time="2024-07-12T13:53:24+02:00" level=info msg="[hostagent] 2024/07/12 13:53:24 tcpproxy: for incoming conn 127.0.0.1:63570, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' +
'time="2024-07-12T13:53:34+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' +
'time="2024-07-12T13:53:37+02:00" level=info msg="[hostagent] 2024/07/12 13:53:37 tcpproxy: for incoming conn 127.0.0.1:63642, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' +
'time="2024-07-12T13:53:47+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' +
'time="2024-07-12T13:53:50+02:00" level=info msg="[hostagent] 2024/07/12 13:53:50 tcpproxy: for incoming conn 127.0.0.1:63704, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' +
'time="2024-07-12T13:54:00+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' +
'time="2024-07-12T13:54:03+02:00" level=info msg="[hostagent] 2024/07/12 13:54:03 tcpproxy: for incoming conn 127.0.0.1:63755, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' +
'time="2024-07-12T13:54:13+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' +
'time="2024-07-12T13:54:16+02:00" level=info msg="[hostagent] 2024/07/12 13:54:16 tcpproxy: for incoming conn 127.0.0.1:63809, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' +
'time="2024-07-12T13:54:26+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' +
'time="2024-07-12T13:54:29+02:00" level=info msg="[hostagent] 2024/07/12 13:54:29 tcpproxy: for incoming conn 127.0.0.1:63873, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' +
'time="2024-07-12T13:54:29+02:00" level=info msg="[hostagent] 2024/07/12 13:54:29 tcpproxy: for incoming conn 127.0.0.1:63866, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' +
'time="2024-07-12T13:54:39+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' +
'time="2024-07-12T13:54:42+02:00" level=info msg="[hostagent] 2024/07/12 13:54:42 tcpproxy: for incoming conn 127.0.0.1:63922, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' +
'time="2024-07-12T13:54:52+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' +
'time="2024-07-12T13:54:55+02:00" level=info msg="[hostagent] 2024/07/12 13:54:55 tcpproxy: for incoming conn 127.0.0.1:63986, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' +
'time="2024-07-12T13:55:05+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' +
'time="2024-07-12T13:55:08+02:00" level=info msg="[hostagent] 2024/07/12 13:55:08 tcpproxy: for incoming conn 127.0.0.1:64046, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' +
'time="2024-07-12T13:55:18+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' +
'time="2024-07-12T13:55:21+02:00" level=info msg="[hostagent] 2024/07/12 13:55:21 tcpproxy: for incoming conn 127.0.0.1:64109, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' +
'time="2024-07-12T13:55:31+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' +
'time="2024-07-12T13:55:34+02:00" level=info msg="[hostagent] 2024/07/12 13:55:34 tcpproxy: for incoming conn 127.0.0.1:64182, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' +
'time="2024-07-12T13:55:45+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' +
'time="2024-07-12T13:55:48+02:00" level=info msg="[hostagent] 2024/07/12 13:55:48 tcpproxy: for incoming conn 127.0.0.1:64232, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' +
'time="2024-07-12T13:55:58+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' +
'time="2024-07-12T13:56:01+02:00" level=info msg="[hostagent] 2024/07/12 13:56:01 tcpproxy: for incoming conn 127.0.0.1:64293, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' +
'time="2024-07-12T13:56:11+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' +
'time="2024-07-12T13:56:14+02:00" level=info msg="[hostagent] 2024/07/12 13:56:14 tcpproxy: for incoming conn 127.0.0.1:64361, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' +
'time="2024-07-12T13:56:24+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' +
'time="2024-07-12T13:56:27+02:00" level=info msg="[hostagent] 2024/07/12 13:56:27 tcpproxy: for incoming conn 127.0.0.1:64423, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' +
'time="2024-07-12T13:56:37+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' +
'time="2024-07-12T13:56:40+02:00" level=info msg="[hostagent] 2024/07/12 13:56:40 tcpproxy: for incoming conn 127.0.0.1:64481, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' +
'time="2024-07-12T13:56:50+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' +
'time="2024-07-12T13:56:53+02:00" level=info msg="[hostagent] 2024/07/12 13:56:53 tcpproxy: for incoming conn 127.0.0.1:64533, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' +
'time="2024-07-12T13:57:03+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' +
'time="2024-07-12T13:57:06+02:00" level=info msg="[hostagent] 2024/07/12 13:57:06 tcpproxy: for incoming conn 127.0.0.1:64591, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' +
'time="2024-07-12T13:57:16+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' +
'time="2024-07-12T13:57:19+02:00" level=info msg="[hostagent] 2024/07/12 13:57:19 tcpproxy: for incoming conn 127.0.0.1:64682, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' +
'time="2024-07-12T13:57:29+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' +
'time="2024-07-12T13:57:32+02:00" level=info msg="[hostagent] 2024/07/12 13:57:32 tcpproxy: for incoming conn 127.0.0.1:64735, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' +
'time="2024-07-12T13:57:42+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' +
'time="2024-07-12T13:57:45+02:00" level=info msg="[hostagent] 2024/07/12 13:57:45 tcpproxy: for incoming conn 127.0.0.1:64800, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' +
'time="2024-07-12T13:57:55+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' +
'time="2024-07-12T13:57:58+02:00" level=info msg="[hostagent] 2024/07/12 13:57:58 tcpproxy: for incoming conn 127.0.0.1:64861, error dialing \"192.168.5.15:22\": connect tcp 192.168.5.15:22: no route to host"\n' +
'time="2024-07-12T13:58:08+02:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 3: \"ssh\""\n' +
'time="2024-07-12T13:58:11+02:00" level=info msg="[hostagent] 2024/07/12 13:58:11 tcpproxy: for incoming conn 127.0.0.1:64913, error'... 5805 more characters,
code: 1,
[Symbol(child-process.command)]: '/Applications/Rancher Desktop.app/Contents/Resources/resources/darwin/lima/bin/limactl.ventura start --tty=false 0'
}
2024-07-12T12:02:05.489Z: Error starting lima: c [Error]: /Applications/Rancher Desktop.app/Contents/Resources/resources/darwin/lima/bin/limactl.ventura exited with code 1
(stack trace, command, and stderr identical to the error block above)
}
`
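
The hostagent line in the log above hints at the next debugging step; a sketch for watching the VM boot, using the serial log path printed in that output:

$ tail -f "$HOME/Library/Application Support/rancher-desktop/lima/0/serial"*.log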

@micheltlutz-compass

I was able to resolve the issue on mac with the following setup:

image (10) image (9)

I also allow Rancher Desktop to use admin permission and disabled Traefik.

Works for me on macOS 14.7 (23H124).

@fterra-encora

I was able to resolve the issue on mac with the following setup:
image (10) image (9)
I also allow Rancher Desktop to use admin permission and disabled Traefik.

Works for me on macOS 14.7 (23H124).

I can't see the images in the comment. The links don't work.

"This private-user-images.githubusercontent.com page can’t be found"

@saghul

saghul commented Oct 3, 2024

See the link some comments above: #1209 (comment)

@fterra-encora

See the link some comments above: #1209 (comment)

Oh thanks, that was my bad.

@VladAdGad

None of the comments above helped me, but I found a solution. I'm providing screenshots; you don't need to do anything else, just remove override.yaml if you have one.
Screenshot 2024-11-21 at 11 52 01
Screenshot 2024-11-21 at 11 52 05

Software:

System Software Overview:

  System Version: macOS 14.7 (23H124)
  Kernel Version: Darwin 23.6.0
  Boot Volume: Macintosh HD
  Boot Mode: Normal
  Computer Name: -
  User Name: -
  Secure Virtual Memory: Enabled
  System Integrity Protection: Enabled
  Time since boot: 1 hour, 57 minutes
