
[BUG] Regression: array items[0,1] must be unique starting from 2.24.1 #11371

Closed
paolomainardi opened this issue Jan 18, 2024 · 59 comments · Fixed by compose-spec/compose-go#533
Comments

@paolomainardi

Description

As per the subject, starting from 2.24.1, I am encountering this issue when there are overrides.

Steps To Reproduce

Create 2 files:

  1. docker-compose.yaml
version: "3.8"

services:
  test:
    image: ubuntu:latest
    command: sleep infinity
    volumes:
      - ./src:/src
  2. docker-compose.override.yaml
services:
  test:
    volumes:
      - ./src:/src

With 2.23.3:

❯ dc version
Docker Compose version 2.23.3
❯ dc down -v

With 2.24.1:

❯ ./dc-2.24.1 version
Docker Compose version v2.24.1
❯ ./dc-2.24.1 down -v
validating /home/paolo/temp/dc-compose/docker-compose.override.yml: services.test.volumes array items[0,1] must be unique
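A quick way to check whether a given Compose binary is affected, without starting anything, is to render the merged model with the standard config command; on an affected version this should fail with the same validation error, while on 2.23.x the duplicated mount is silently collapsed into a single entry:

❯ docker compose -f docker-compose.yaml -f docker-compose.override.yaml config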

Compose Version

Docker Compose version v2.24.1

Docker Environment

❯ docker info
Client:
 Version:    24.0.7
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  0.12.1
    Path:     /usr/lib/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  2.23.3
    Path:     /usr/lib/docker/cli-plugins/docker-compose

Server:
 Containers: 15
  Running: 9
  Paused: 0
  Stopped: 6
 Images: 141
 Server Version: 24.0.7
 Storage Driver: btrfs
  Btrfs:
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 71909c1814c544ac47ab91d2e8b84718e517bb99.m
 runc version:
 init version: de40ad0
 Security Options:
  seccomp
   Profile: builtin
  cgroupns
 Kernel Version: 6.6.11-2-lts
 Operating System: Arch Linux
 OSType: linux
 Architecture: x86_64
 CPUs: 24
 Total Memory: 30.49GiB
 Name: paolo-cto-arch-wood
 ID: ZRJM:NTZC:JCYV:OSU3:VB2H:N2CW:ZCLD:PCGW:JGT5:B2BR:445A:GEHV
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

Anything else?

No response

@ndeloof
Contributor

ndeloof commented Jan 18, 2024

Thanks for reporting.
This only applies when both the base and override compose files declare the exact same volume. Can you please explain why you do so?

@paolomainardi
Author

Thank you for getting back to me so promptly, @ndeloof. In my case there is no good reason for the duplication: it comes from a chain of docker-compose files where the last one adds the same volumes again. Unfortunately, I cannot modify it, as the base docker-compose files are managed by a custom framework.

In versions <= 2.23, these duplicates were silently ignored or overwritten. That is no longer the case, which makes this a breaking change.

@ndeloof
Contributor

ndeloof commented Jan 18, 2024

OK, just wanted to check I was not missing a hack-ish use case :)
A fix is on its way.

@paolomainardi
Author

Thanks @ndeloof :)

@freyjadomville

freyjadomville commented Jan 18, 2024

I'm also getting a (possibly similar) regression with the following as a single file, with the same error message. It builds successfully with 2.23.3. The uniqueness constraint here is too strict, as the two OpenSearch containers in this compose file back different services:

version: '3.8'

services:
  postgres:
    image: postgres:15
    networks:
      - client-portal
    environment:
      POSTGRES_PASSWORD: ${DATABASE_PASSWORD}
      POSTGRES_USER: ${DATABASE_USERNAME}
      POSTGRES_DB: ${DATABASE_NAME}
      PG_DATA: /var/lib/postgresql/data
    volumes:
      - portalpgdata:/var/lib/postgresql/data
    ports:
      - '${DATABASE_PORT}:5432'
    extra_hosts:
      - "host.docker.internal:host-gateway"

  pgadmin:
    extends:
      file: docker-compose.excel.yml
      service: pgadmin
    networks:
      - client-portal
    depends_on:
      - postgres
      - data-postgres
    extra_hosts:
      - "host.docker.internal:host-gateway"

  cms:
    build:
      dockerfile: cms/Dockerfile.dev
    volumes:
      - ./cms/config:/opt/app/config
      - ./cms/src:/opt/app/src
      - ./cms/package.json:/opt/package.json
      - portalcmsmedia:/opt/app/public/uploads
      - ./cms/types:/opt/app/types
      - /opt/app/src/plugins # comment this line out (and the one below) to do plugin development
    ports:
      - '${CMS_API_PORT}:1337'
      #- '8000:8000' # uncomment this line (and the one above) to do plugin development and connect to localhost:8000
    environment:
      HOST: '0.0.0.0'
      PORT: '1337'
      CMS_URL: 'http://localhost:${CMS_API_PORT}'
      APP_KEYS: <value>
      API_TOKEN_SALT: <value>
      ADMIN_JWT_SECRET: <value>
      JWT_SECRET: <value>
      TRANSFER_TOKEN_SALT: <value>
      DATABASE_CLIENT: 'postgres'
      DATABASE_HOST: 'postgres'
      DATABASE_PORT: '5432'
      DATABASE_NAME: ${DATABASE_NAME}
      DATABASE_USERNAME: ${DATABASE_USERNAME}
      DATABASE_PASSWORD: ${DATABASE_PASSWORD}
      DATABASE_SSL: 'false'
      DATABASE_POOL_MIN: '0'
      NEXT_REVALIDATE_TOKEN: ${NEXT_REVALIDATE_TOKEN}
      AUTH_SERVICE_HOST: ${AUTH_SERVICE_HOST}
      AUTH_SERVICE_PORT: ${AUTH_SERVICE_PORT}
      STRAPI_SES_AWS_ACCESS_KEY_ID: ${STRAPI_SES_AWS_ACCESS_KEY_ID}
      STRAPI_SES_AWS_SECRET_ACCESS_KEY: ${STRAPI_SES_AWS_SECRET_ACCESS_KEY}
      IS_LOCAL: 'true'
      REPORTS_BUCKET: ${REPORTS_BUCKET}
      REPORTS_QUEUE: ${REPORTS_QUEUE}
      REGION: ${REGION}
      LOCALSTACK_ENDPOINT: 'http://host.docker.internal:4566'
      CMS_PREVIEW_TOKEN: ${CMS_PREVIEW_TOKEN}
      FRONTEND_URL: ${FRONTEND_URL}
      FRONTEND_HOST: ${FRONTEND_HOST}
      FRONTEND_PORT: ${FRONTEND_PORT}
    networks:
      - client-portal
    command: npm run develop # -- --watch-admin # uncomment this line for plugin development
    depends_on:
      - postgres
      - localstack
      - auth-service
    extra_hosts:
      - "host.docker.internal:host-gateway"

  frontend:
    build:
      dockerfile: frontend/Dockerfile
      target: develop
    volumes:
      - ./frontend:/usr/src/app
      - ./common-data-client:/usr/src/common-data-client
      - portalcmsmedia:/usr/src/app/public/uploads
      - /usr/src/app/.next
      - /usr/src/app/node_modules # Anonymous volume to prevent the container's node_modules from being overwritten by the local one
    ports:
      - '${FRONTEND_PORT}:${FRONTEND_PORT}'
      - 0.0.0.0:9232:9229
      - 0.0.0.0:9233:9230
    environment:
      NODE_ENV: 'development'
      WATCHPACK_POLLING: 'true'

      PORT: ${FRONTEND_PORT}
      AUTH_SERVICE_HOST: ${AUTH_SERVICE_HOST}
      AUTH_SERVICE_PORT: ${AUTH_SERVICE_PORT}
      PUBLISHED_DATA_API_URL: ${PUBLISHED_DATA_API_URL}
      THOUGHTSPOT_HOST: ${THOUGHTSPOT_HOST}
      THOUGHTSPOT_SECRET_KEY: ${THOUGHTSPOT_SECRET_KEY}
      THOUGHTSPOT_DATA_SOURCE: ${THOUGHTSPOT_DATA_SOURCE}
      THOUGHTSPOT_OVERRIDE_SUBSCRIPTION: ${THOUGHTSPOT_OVERRIDE_SUBSCRIPTION}
      NEXTAUTH_URL: 'http://localhost:${FRONTEND_PORT}'
      NEXT_REVALIDATE_TOKEN: ${NEXT_REVALIDATE_TOKEN}
      NEXTAUTH_SECRET: '0CaFHo7J6Q9xzYRPkjtLn5FEmgLl7Cp86MJTVjEzuUI='
      MIXPANEL_PROJECT_TOKEN: ${MIXPANEL_PROJECT_TOKEN}
    networks:
      - client-portal
    command: npm run dev
    depends_on:
      - auth-service
      - cms
      - published-data-api
    extra_hosts:
      - "host.docker.internal:host-gateway"

  auth-service:
    build:
      dockerfile: auth-service/Dockerfile
      target: development
    volumes:
      - ./auth-service:/usr/src/app
      - /usr/src/app/node_modules # Anonymous volume to prevent the container's node_modules from being overwritten by the local one
    ports:
      - '${AUTH_SERVICE_PORT}:${AUTH_SERVICE_PORT}'
      - '9231:9229' # debug port
    environment:
      LOCALSTACK_ENDPOINT: 'http://host.docker.internal:4566'
      IS_LOCAL: 'true'
    env_file:
      - .env
    networks:
      - client-portal
    command: npm run start:debug
    depends_on:
      opensearch:
        condition: service_healthy
    extra_hosts:
      - "host.docker.internal:host-gateway"

  opensearch:
    image: opensearchproject/opensearch:2.9.0
    container_name: portal-opensearch
    environment:
      - compatibility.override_main_response_version=true
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m"
      - "DISABLE_INSTALL_DEMO_CONFIG=true"
      - "DISABLE_SECURITY_PLUGIN=true"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    volumes:
      - opensearch-data:/usr/share/opensearch/data
      - ./auth-service/src/opensearch/synonyms:/usr/share/opensearch/config/analysis
    ports:
      - ${OPENSEARCH_PORT_1}:9200 
      - ${OPENSEARCH_PORT_2}:9600
    networks:
      - client-portal
    healthcheck:
      test: "curl -s http://opensearch:9200 > /dev/null || exit 1"
      interval: 2s
      timeout: 30s
      retries: 50
      start_period: 1s
    extra_hosts:
      - "host.docker.internal:host-gateway"

  opensearch-dashboards:
    image: opensearchproject/opensearch-dashboards:2.9.0
    container_name: portal-opensearch-dashboards
    ports:
      - 0.0.0.0:${OPENSEARCH_DASHBOARDS_PORT}:5601
    expose:
      - "5601"
    environment:
      - OPENSEARCH_HOSTS=["http://opensearch:9200"]
      - "DISABLE_SECURITY_DASHBOARDS_PLUGIN=true"
    networks:
      - client-portal
    extra_hosts:
      - "host.docker.internal:host-gateway"

  localstack:
    image: localstack/localstack:2.3.2
    hostname: localstack
    restart: always
    healthcheck:
      test: [ "CMD", "curl", "http://_localstack/health?reload" ]
    environment:
      - SERVICES=s3,sqs,sns
      - DATA_DIR=/tmp/localstack/data
      - DEBUG=1
      - AWS_ACCESS_KEY_ID=test
      - AWS_SECRET_ACCESS_KEY=test
      - AWS_DEFAULT_REGION=eu-west-2
      - DOCKER_HOST=unix:///var/run/docker.sock
      - "${LOCALSTACK_VOLUME_DIR:-./volume}:/var/lib/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
    ports:
      - "4566:4566"
    volumes:
      - localstack-data:/tmp/localstack:rw
      - ./setup/entrypoints/create_localstack_resources.sh:/etc/localstack/init/ready.d/init-aws.sh
    extra_hosts:
      - "host.docker.internal:host-gateway"
  
  published-data-api:
    extends:
      file: docker-compose.excel.yml
      service: published-data-api
    networks:
      - client-portal
    depends_on:
      - data-postgres

  data-postgres:
    extends:
      file: docker-compose.excel.yml
      service: data-postgres
    networks:
      - client-portal

  tracking-opensearch:
    extends:
      file: docker-compose.excel.yml
      service: opensearch
    networks:
      - client-portal

  tracking-opensearch-dashboards:
    extends:
      file: docker-compose.excel.yml
      service: opensearch-dashboards
    environment:
      - OPENSEARCH_HOSTS=["http://tracking-opensearch:9200"]
      - "DISABLE_SECURITY_DASHBOARDS_PLUGIN=true"
    networks:
      - client-portal
    
  mock-auth-server:
    extends:
      file: docker-compose.excel.yml
      service: mock-auth-server
    networks:
      - client-portal

networks:
  client-portal:
  published-data:
volumes:
  portalpgdata:
  portalpgadmin:
  portalcmsmedia:
  opensearch-data:
  localstack-data:
  pgdata:
  pgadmin:
  tracking-opensearch-data:
$ docker-compose up --build -V
validating /home/freyjadomville/git/project/docker-compose.yml: services.tracking-opensearch-dashboards.environment array items[1,3] must be unique

@ndeloof
Contributor

ndeloof commented Jan 18, 2024

@freyjadomville the same PR will fix your issue, but in the meantime you can simply remove the redefinition of environment in the tracking-opensearch-dashboards service declaration, as via extends it already gets those values.
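For reference, a sketch of what that could look like, assuming the opensearch-dashboards service in docker-compose.excel.yml already sets DISABLE_SECURITY_DASHBOARDS_PLUGIN (which would be the duplicate that items[1,3] points at), so only the value that actually differs needs to stay:

  tracking-opensearch-dashboards:
    extends:
      file: docker-compose.excel.yml
      service: opensearch-dashboards
    environment:
      # OPENSEARCH_HOSTS differs from the extended service, so it stays;
      # DISABLE_SECURITY_DASHBOARDS_PLUGIN is assumed to come from the extended service
      - OPENSEARCH_HOSTS=["http://tracking-opensearch:9200"]
    networks:
      - client-portal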

@logopk

logopk commented Jan 18, 2024

I get the same error for ports.

Port 1514/tcp is declared in both docker-compose.yml AND the override.

@ndeloof
Contributor

ndeloof commented Jan 19, 2024

@logopk the same fix will apply. Any reason you use this duplicated declaration?

@logopk

logopk commented Jan 19, 2024

@ndeloof: I cannot tell you exactly. As this is my test environment, the port may have been in the override first and later also ended up in the regular prod compose file.

The same applies to the volume problem, although there the duplicate was also needed for the external: true declaration.
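In case it helps, a sketch of how that can usually be expressed without repeating the service-level mount (the volume name below is hypothetical): the override only needs to mark the top-level named volume as external, while the base file keeps the service's volumes entry:

# docker-compose.override.yml - sketch, hypothetical volume name
volumes:
  appdata:
    external: true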

@paolomainardi
Author

Thanks a lot @ndeloof

zsarge added a commit to nkcyber/cybersword that referenced this issue Jan 20, 2024
This commit resolves the issue described in:
<docker/compose#11371>
zsarge added a commit to nkcyber/cybersword that referenced this issue Jan 20, 2024
This commit resolves the issue described in:
<docker/compose#11371>
@radim-ek

So, is this the same thing?

validating /root/qfieldcloud/docker-compose.override.local.yml: services.app.environment array items[1,46] must be unique

version: '3.9'

services:

  app:
    build:
      args:
        - DEBUG_BUILD=1
    ports:
      # allow direct access without nginx
      - ${DJANGO_DEV_PORT}:8000
      - ${DEBUG_DEBUGPY_APP_PORT:-5678}:5678
    volumes:
      # mount the source for live reload
      - ./docker-app/qfieldcloud:/usr/src/app/qfieldcloud
    environment:
      DEBUG: 1
    command: python3 -m debugpy --listen 0.0.0.0:5678 manage.py runserver 0.0.0.0:8000
    depends_on:
      - db

  worker_wrapper:
    scale: ${QFIELDCLOUD_WORKER_REPLICAS}
    build:
      args:
        - DEBUG_BUILD=1
    ports:
      - ${DEBUG_DEBUGPY_WORKER_WRAPPER_PORT:-5679}:5679
    environment:
      QFIELDCLOUD_LIBQFIELDSYNC_VOLUME_PATH: ${QFIELDCLOUD_LIBQFIELDSYNC_VOLUME_PATH}
    volumes:
      # mount the source for live reload
      - ./docker-app/qfieldcloud:/usr/src/app/qfieldcloud
      - ./docker-app/worker_wrapper:/usr/src/app/worker_wrapper
    command: python3 -m debugpy --listen 0.0.0.0:5679 manage.py dequeue

  smtp4dev:
    image: rnwood/smtp4dev:v3
    restart: always
    ports:
      # Web interface
      - ${SMTP4DEV_WEB_PORT}:80
      # SMTP server
      - ${SMTP4DEV_SMTP_PORT}:25
      # IMAP
      - ${SMTP4DEV_IMAP_PORT}:143
    volumes:
        - smtp4dev_data:/smtp4dev
    environment:
      # Specifies the server hostname. Used in auto-generated TLS certificate if enabled.
      - ServerOptions__HostName=smtp4dev

  db:
    image: postgis/postgis:13-3.1-alpine
    restart: unless-stopped
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    ports:
      - ${HOST_POSTGRES_PORT}:5432
    command: ["postgres", "-c", "log_statement=all", "-c", "log_destination=stderr"]

  memcached:
    ports:
      - "${MEMCACHED_PORT}:11211"

  qgis:
    volumes:
      # allow local development for `libqfieldsync` if host directory present; requires `PYTHONPATH=/libqfieldsync:${PYTHONPATH}`
      - ./docker-qgis/libqfieldsync:/libqfieldsync:ro

  geodb:
    image: postgis/postgis:12-3.0
    restart: unless-stopped
    volumes:
      - geodb_data:/var/lib/postgresql
    environment:
      POSTGRES_DB: ${GEODB_DB}
      POSTGRES_USER: ${GEODB_USER}
      POSTGRES_PASSWORD: ${GEODB_PASSWORD}
    ports:
      - ${GEODB_PORT}:5432

  minio:
    image: minio/minio:RELEASE.2023-04-07T05-28-58Z
    restart: unless-stopped
    volumes:
      - minio_data1:/data1
      - minio_data2:/data2
      - minio_data3:/data3
      - minio_data4:/data4
    environment:
      MINIO_ROOT_USER: ${STORAGE_ACCESS_KEY_ID}
      MINIO_ROOT_PASSWORD: ${STORAGE_SECRET_ACCESS_KEY}
      MINIO_BROWSER_REDIRECT_URL: http://${QFIELDCLOUD_HOST}:${MINIO_BROWSER_PORT}
    command: server /data{1...4} --console-address :9001
    healthcheck:
        test: [
          "CMD",
          "curl",
          "-A",
          "Mozilla/5.0 (X11; Linux x86_64; rv:30.0) Gecko/20100101 Firefox/30.0",
          "-f",
          "${STORAGE_ENDPOINT_URL}/minio/index.html"
        ]
        interval: 5s
        timeout: 20s
        retries: 5
    ports:
      - ${MINIO_BROWSER_PORT}:9001
      - ${MINIO_API_PORT}:9000

  createbuckets:
    image: minio/mc
    depends_on:
      minio:
        condition: service_healthy
    entrypoint: >
      /bin/sh -c "
      /usr/bin/mc config host add myminio ${STORAGE_ENDPOINT_URL} ${STORAGE_ACCESS_KEY_ID} ${STORAGE_SECRET_ACCESS_KEY};
      /usr/bin/mc mb myminio/${STORAGE_BUCKET_NAME};
      /usr/bin/mc policy set download myminio/${STORAGE_BUCKET_NAME}/users;
      /usr/bin/mc version enable myminio/${STORAGE_BUCKET_NAME};
      exit 0;
      "

volumes:
  postgres_data:
  geodb_data:
  smtp4dev_data:
  minio_data1:
  minio_data2:
  minio_data3:
  minio_data4:

@ErjanGavalji

ErjanGavalji commented Jan 22, 2024

Hello all,

This is just in case this scenario was not covered by PR 533. Here is a simple case where docker compose complains about the environment setting (services.myservice.environment array items[0,1] must be unique):

main.yml

services:
  myservice:
    image: alpine:latest
    environment: 
      - MYVAR=MyVarValue

secondary.yml:

name: my-project
services:
  myservice:
    extends:
      file: ${PWD}/main.yml
      service: myservice

command:

docker compose -f main.yml -f secondary.yml up

Edit:
If it matters: I stripped everything out just to keep the scenario simple. I need the second file to declare volumes on the service that are not always needed.
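For that use case, a sketch of a possible workaround: assuming secondary.yml is only ever passed together with main.yml via -f, the -f merge already layers it on top of the base service, so the extends indirection can be dropped and the file can declare just the optional additions (the volume path below is hypothetical):

name: my-project
services:
  myservice:
    volumes:
      - ./optional-data:/data   # hypothetical, the conditionally-needed mount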

@danstewart

This is working fine with v2.24.2 - the duplicate environment entries work too, so the fix did cover that.

@rappzons

My company is running Docker Desktop on Mac and we got affected by this bug without even changing the Docker Desktop version. How is that possible? The latest version used by Docker Desktop according to the release notes is v2.23.3:
https://docs.docker.com/desktop/release-notes/

@ndeloof
Contributor

ndeloof commented Jan 23, 2024

@rappzons seems like you have docker compose installed manually; check the Docker Desktop menu:
(screenshot of the Docker Desktop menu)

@rappzons

rappzons commented Jan 23, 2024

Update: This was our docker-in-docker setup that had downloaded the latest version of docker-compose, sorry for the confusion.

Thanks for the response @ndeloof. Seems like I don't have that option, perhaps because I've got the free version.

It really looks like I'm running 2.23.3 of compose.


Perhaps this is not the best thread for this :D but I found it really weird that I'm affected by this issue.

@ErjanGavalji

This is working fine with v2.24.2 - the duplicate environment entries work too, so the fix did cover that.

Right. The issue continues appearing with links and profiles though. Here is the repro:

main.yml:

name: my-project
services:
  myfirstservice:
    image: alpine:latest
  myservice:
    image: alpine:latest
    environment:
      - MYVAR=MyVarValue
    links:
      - myfirstservice
    profiles:
      - profile1
      - profile2

secondary.yml:

name: my-project
services:
  myservice:
    extends:
      file: ${PWD}/main.yml
      service: myservice

command:

docker compose -f main.yml -f secondary.yml up

@matanmarciano


Yeah, also seeing this here...

@solarlodge

I have the very same issue here with links and tmpfs...

@giorgiabosello

Even with extra_hosts.

@ndeloof
Contributor

ndeloof commented Jan 29, 2024

@ihor-sviziev this is a local build, not signed/certified. You need to go to System Preferences > Security to approve running such "unsecure" software - or wait for the next release delivered by Docker Desktop :)

@glours
Contributor

glours commented Jan 29, 2024

@ihor-sviziev yes, because the signing of the macOS binary is done as part of the Docker Desktop release; you have to approve it manually in System Settings > Privacy & Security.
You could use those binaries, which use the v2.0.0-rc.3 release of compose-go.

@ihor-sviziev

@glours I can confirm the fixed version resolves this issue for me.

@glours
Contributor

glours commented Jan 29, 2024

@ihor-sviziev thanks for the feedback

@matanmarciano

@glours the new fixes should be included in https://github.com/docker/compose/releases/tag/v2.24.4?

@glours
Contributor

glours commented Jan 30, 2024

@matanmarciano yes

@matanmarciano

@glours it has still not been released to: https://download.docker.com/linux/ubuntu/dists/focal/pool/stable/amd64/

@glours
Contributor

glours commented Jan 30, 2024

@matanmarciano no, indeed; a release of https://github.com/docker/docker-ce-packaging is planned for later this week

@ihor-sviziev

ihor-sviziev commented Feb 1, 2024

@glours, I just received the Docker Desktop update to v4.27.1, but unfortunately, the fixed version of docker-compose wasn't included for some reason. When can we expect it?

@glours
Contributor

glours commented Feb 1, 2024

@ihor-sviziev Yes, they decided to focus on the security fixes for this release; the next patch release of Docker Desktop is planned for next week... Sorry for the delay. In the meantime, you can manually add the Compose v2.24.5 binary to your ~/.docker/cli-plugins directory under the name docker-compose.
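For anyone doing that manually, a sketch of the steps (assuming a Linux x86_64 host; pick the matching release asset, e.g. docker-compose-darwin-aarch64 for Apple Silicon Macs):

mkdir -p ~/.docker/cli-plugins
curl -SL https://github.com/docker/compose/releases/download/v2.24.5/docker-compose-linux-x86_64 \
  -o ~/.docker/cli-plugins/docker-compose
chmod +x ~/.docker/cli-plugins/docker-compose
docker compose version   # should now report v2.24.5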

@indjeto

indjeto commented Feb 4, 2024

I had a services.php.extra_hosts array items[0,1] must be unique error, and after upgrading docker-compose-plugin to 2.24.5-1~ubuntu.20.04~focal (from 2.24.2-1~ubuntu.20.04~focal) the problem is gone.

@ErjanGavalji

Hello again.

I'm afraid there is now a port conflict error once the containers actually start running. Try this:

main.yml:

name: my-project
services:
  myfirstservice:
    image: node:latest
  myservice:
    image: node:latest
    environment:
      - MYVAR=MyVarValue
    ports:
      - 8080:8080
    links:
      - myfirstservice
    command:
      - node
      - -e
      - "require('http').createServer((req, res) => res.end(`Hello World! $${new Date()}`)).listen(8080);"

secondary.yml:

name: my-project
services:
  myservice:
    extends:
      file: ${PWD}/main.yml
      service: myservice

If you run docker compose with only main.yml, the service starts as expected.

If you run it with both configuration files, docker compose -f main.yml -f secondary.yml, you get the following error:

[+] Running 1/0
 ✔ Container my-project-myfirstservice-1  Created                                                                                      0.0s 
Attaching to myfirstservice-1, myservice-1
Error response from daemon: driver failed programming external connectivity on endpoint my-project-myservice-1 (6e5afc6bd5546dc89feb0f4a019ba2f783e6b39d535fa4ed36a1eefb67664621): Bind for 0.0.0.0:8080 failed: port is already allocated

Versions used:

Docker version 25.0.2, build 29cf629
Docker Compose version v2.24.5
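One way to confirm whether the merge is duplicating the published port is to render the resolved model and inspect the ports section of myservice; if the 8080 mapping appears twice there, the extends + -f combination is the culprit (standard command, no extra flags):

docker compose -f main.yml -f secondary.yml config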

@visuallization

Docker Desktop 4.27.2 luckily fixed the issues for us!

@k1w1m8

k1w1m8 commented Feb 22, 2024

Is this released? No milestone assigned...

@glours
Contributor

glours commented Feb 22, 2024

@k1w1m8 those fixes have been released in Compose v2.24.4

@derekcentrico

I don't mean to be a pain here, but I'm seeing something similar on v2.25.0. I figured it best to post on this thread rather than open a new issue, as it seems related (or may just be a newbie mistake on my part):

"validating /home/docker/docker-compose.override.yml: services.scrutinyanalogj.devices array items[0,11] must be unique"

  scrutinyanalogj:
    ports:
      - '86:80'
      - '8886:8080'
    volumes:
      - '/home/docker/scrutinyanalogj:/opt/scrutiny/config'
      - '/home/docker/influxdb2:/opt/scrutiny/influxdb'
      - '/run/udev:/run/udev:ro'
    restart: always
    logging:
      options:
        max-size: 1g
    container_name: scrutinyanalogj
    environment:
      - PUID=1000
      - PGID=996
      - TZ=America/New_York
    devices:
      - '/dev/nvme0n1p1:/dev/nvme0'
      - '/dev/nvme1n1p1:/dev/nvme1'
      - '/dev/nvme2n1p1:/dev/nvme2'
      - '/dev/sda:/dev/sda'
      - '/dev/sdb:/dev/sdb'
      - '/dev/sdc:/dev/sdc'
      - '/dev/sdd:/dev/sdd'
      - '/dev/sde:/dev/sde'
      - '/dev/sdf:/dev/sdf'
      - '/dev/sdg:/dev/sdg'
      - '/dev/sdh:/dev/sdh'
    cap_add:
      - SYS_ADMIN
      - SYS_RAWIO
    image: ghcr.io/analogj/scrutiny:master-omnibus
    networks:
      vpnsys_net:
        ipv4_address: '172.22.0.109'

@ndeloof
Contributor

ndeloof commented Mar 27, 2024

@derekcentrico tried to reproduce, works for me:

$ docker compose version
Docker Compose version v2.24.6-desktop.1
$ docker compose config
services:
  test:
    devices:
      - /dev/nvme0n1p1:/dev/nvme0
      - /dev/nvme1n1p1:/dev/nvme1
      - /dev/nvme2n1p1:/dev/nvme2
      - /dev/sda:/dev/sda
      - /dev/sdb:/dev/sdb
      - /dev/sdc:/dev/sdc
      - /dev/sdd:/dev/sdd
      - /dev/sde:/dev/sde
      - /dev/sdf:/dev/sdf
      - /dev/sdg:/dev/sdg
      - /dev/sdh:/dev/sdh

@derekcentrico

derekcentrico commented Mar 27, 2024


Thanks for checking. You led me down the ultra-newb path of a reboot. I don't understand why that mattered, since everything was fine previously and I had just rebooted after an apt upgrade, but hey, it works now!

@alex-Symbroson

I recently had the profiles array items[0,1] must be unique error, and updating to 4.28.0 fixed it for me.
However, after that I got the error service "myservice" can't be used with 'extends' as it declare 'depends_on'.
So I extracted a base service definition and defined multiple target services based on it, but after that the compose build command is stuck in an infinite resolve image config for docker.io/docker/dockerfile:1.

Here is a minimal configuration where this happens:

version: '3'
services:
  service-base:
    build: .

  service-test:
    extends: 
      service: service-base
    build:
      target: .
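In case it helps narrow this down: build.target is expected to name a stage in the Dockerfile rather than a path, so a variant along these lines (the stage names base and test are hypothetical and must exist in the Dockerfile) may behave differently:

services:
  service-base:
    build:
      context: .
      target: base   # hypothetical Dockerfile stage

  service-test:
    extends:
      service: service-base
    build:
      context: .
      target: test   # hypothetical Dockerfile stage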

@YtvwlD

YtvwlD commented Apr 30, 2024

I'm getting this error also for security_opt. I'm on 2.26.1.

I have the following in both docker-compose.yaml and in docker-compose.override.yaml:

services:
   traefik:
     security_opt:
       - label:type:container_runtime_t

Is this expected or is this a bug? (This bug or a new one? :D)

@ndeloof
Contributor

ndeloof commented Apr 30, 2024

@YtvwlD please open a new bug for this

@GuillaumeCisco

I still experience the issue with:

docker compose version
Docker Compose version v2.29.6

error message:

validating /home/$USER/Projects/docker-compose.yml: services.foo.extra_hosts array items[0,1] must be unique

Shouldn't this have been resolved?

@ndeloof
Contributor

ndeloof commented Sep 23, 2024

@GuillaumeCisco sounds like a separate, though comparable, issue related to extra_hosts. Please file a new issue so we can investigate.

@GuillaumeCisco

Thanks @ndeloof, will do right away ;)
