[Bug] Creating new entities #7
Hmm, that could happen if the container Id in Docker is changing. Not sure why that would happen.
I don't really understand how entities work in HomeAssistant, but can the integration automatically delete entities that are no longer managed by Portainer?
It's how entities work in Docker, not HA.
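On the auto-deletion question above: an integration can prune registry entries whose unique id no longer matches a live container. A minimal sketch, assuming the integration keeps a set of unique ids for currently-running containers (the `current_unique_ids` parameter and function name are hypothetical, not the integration's actual code):

```python
from homeassistant.helpers import entity_registry as er

def remove_stale_entities(hass, config_entry, current_unique_ids):
    """Remove registry entries that no longer correspond to a live container."""
    registry = er.async_get(hass)
    for entry in er.async_entries_for_config_entry(registry, config_entry.entry_id):
        # Anything whose unique_id is not in the current container set is stale.
        if entry.unique_id not in current_unique_ids:
            registry.async_remove(entry.entity_id)
```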
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.
This issue was closed because it has been stalled for 5 days with no activity.
The same thing happened to me: when a stack is stopped/restarted, the containers inside it are re-deployed with a new Id, apparently, and a new entity is created every time. Would it be possible to reference the containers using their names instead of their Ids?
Not sure if their name is constant, especially with compose if a name is not specified.
If …
Are you able to test it and confirm that it won't change? For me it looks OK, but it's better if more people can perform this test.
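For anyone who wants to run that test, a quick check from the docker CLI (assuming a compose service named `diun`, matching the stack shared later in this thread):

```bash
# Record the current container Id and name
docker ps --filter name=diun --format '{{.ID}}  {{.Names}}'

# Force a re-create: the Id should change, the name should not
docker compose up -d --force-recreate diun
docker ps --filter name=diun --format '{{.ID}}  {{.Names}}'
```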
Thanks!
Would love to see this implemented as well! In docker compose you can use:
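A minimal sketch of the `container_name` option, reusing the `diun` service from the stack shared later in this thread:

```yaml
services:
  diun:
    image: crazymax/diun:latest
    # Pin the container name so it stays identical across re-creates;
    # without this, compose generates names like <project>-diun-1.
    container_name: diun
```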
This will give the container the same name every time. I have this for all my containers, because many of them connect via API and use the container name as the connection point (DNS). The name_2 / name_3 etc. is spamming my homeassistant with new entities on every image pull / upgrade / recreate. I'm on the waiting list for when this gets implemented; would love to see a quick overview in homeassistant of running containers, or crashed containers (not that this ever happens, though).
I have the same issue as described. I guess it is related to containers being recreated/updated/restarted with updated container images, either by re-pulling the image manually or automatically, e.g. by Watchtower. I guess those container image updates result in a new container Id (which doesn't seem to change when simply restarting a container). I just looked really quickly through the code, but is that maybe the location that uses the container Id as the unique id, and thus results in a new entity?
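If that guess is right, the pattern in question would look roughly like this (a hypothetical sketch based on the unique_id format visible in the registry dump below, not the integration's actual source):

```python
from homeassistant.helpers.update_coordinator import CoordinatorEntity

class ContainerSensor(CoordinatorEntity):
    """Hypothetical sketch of the suspected pattern."""

    def __init__(self, coordinator, container: dict) -> None:
        super().__init__(coordinator)
        # Keying unique_id on the Docker container Id means every
        # re-created container (new Id) registers as a brand-new entity.
        self._attr_unique_id = f"portainer-containers-{container['Id']}"
```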
I wanted to see if it was a quick fix that I could make a quick PR for, because it's been bugging me, but I realise now that it's a pretty fundamental change in how the integration is configured and currently working. Still, I wanted to continue the discussion.

I took some time tonight to try and track this back; hope my findings are helpful in diagnosing the source and proposing a solution. Please let me know if anything below is unclear!

For context, here's a cut-down excerpt from my HomeAssistant entity registry:

```json
{
"aliases": [],
"area_id": null,
"categories": {},
"capabilities": null,
"config_entry_id": "8264ac4f59383c2af498c70f2320ddc6",
"device_class": null,
"device_id": "48d3d1e54469fa68a1c952ddb0fa7a03",
"disabled_by": null,
"entity_category": null,
"entity_id": "sensor.portainer_local_diun_3",
"hidden_by": null,
"icon": null,
"id": "5922f825c44547e9864c38d2ce0b585a",
"has_entity_name": true,
"labels": [],
"name": null,
"options": {
"conversation": {
"should_expose": false
}
},
"original_device_class": null,
"original_icon": null,
"original_name": "diun",
"platform": "portainer",
"supported_features": 0,
"translation_key": null,
"unique_id": "portainer-containers-11da9f5caeb19dcc3ec655b14f8dc51727084f5a3b9e51645457a0bff7caa9be",
"previous_unique_id": null,
"unit_of_measurement": null
},
{
"config_entry_id": "8264ac4f59383c2af498c70f2320ddc6",
"entity_id": "sensor.portainer_local_diun_2",
"id": "86e55c48b07445c1cd206ccee48d80d2",
"orphaned_timestamp": null,
"platform": "portainer",
"unique_id": "portainer-containers-415a5405cc3af21aec9360d24a4cfe72e455d3fae0459d23fb881107bd3f672c"
},
{
"config_entry_id": "8264ac4f59383c2af498c70f2320ddc6",
"entity_id": "sensor.portainer_local_diun",
"id": "4acbf7b0aa23cc3bddb45b364e962d7b",
"orphaned_timestamp": null,
"platform": "portainer",
"unique_id": "portainer-containers-2c1eb51417e5274fbe1a48cfdc2448a459faad902620c5cb7272aec8184de65c"
},
```

Here's the (REDACTED) matching/latest context for that container from the Portainer API:

```json
{
"Command": "diun serve",
"Created": 1712409417,
"HostConfig": {
"(REDACTED)"
},
"Id": "11da9f5caeb19dcc3ec655b14f8dc51727084f5a3b9e51645457a0bff7caa9be",
"Image": "crazymax/diun:latest",
"ImageID": "sha256:ecac071c00b8af8887c851a2fadf16054dcae0ec4876de19cd6acc5133fcae2f",
"Labels": {
"(REDACTED)"
},
"Mounts": [
"(REDACTED)"
],
"Names": [
"/diun"
],
"NetworkSettings": {
"(REDACTED)"
},
"Ports": [],
"State": "running",
"Status": "Up 26 hours"
}
```

Here's the (REDACTED) version of my docker-compose for that Portainer stack:

```yaml
version: "3.5"
services:
  diun:
    image: crazymax/diun:latest
    container_name: diun
    hostname: diun-portainer
    command: serve
    volumes:
      - data:/data
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - DIUN_NOTIF_SLACK_WEBHOOKURL=${DIUN_NOTIF_SLACK_WEBHOOKURL}
      - DIUN_PROVIDERS_DOCKER=${DIUN_PROVIDERS_DOCKER}
      - DIUN_PROVIDERS_DOCKER_WATCHBYDEFAULT=${DIUN_PROVIDERS_DOCKER_WATCHBYDEFAULT}
      - DIUN_PROVIDERS_DOCKER_WATCHSTOPPED=${DIUN_PROVIDERS_DOCKER_WATCHSTOPPED}
      - DIUN_WATCH_FIRSTCHECKNOTIF=${DIUN_WATCH_FIRSTCHECKNOTIF}
      - DIUN_WATCH_JITTER=30s
      - DIUN_WATCH_SCHEDULE=0 */6 * * *
      - DIUN_WATCH_WORKERS=20
      - TZ=${TZ}
    restart: unless-stopped

volumes:
  data:
    (REDACTED)
```

We can see from the above that the Portainer entry's `unique_id` suffix (`11da9f5c…`) is exactly the Docker container's `Id`, so a re-created container (new `Id`) produces a new `unique_id`, and therefore a new entity.
This mapping from API response to HomeAssistant entity is mainly done in the code that assigns each sensor's unique id.

Understandably, for people using this integration to monitor long-running, single, specific container instances, this is helpful, so we shouldn't change this functionality without understanding the impact on that subset of users. But for most people who use containers as they were designed (immutable, short-lived, easily-replaced images, leveraging external sources for maintaining state), I would suggest this is not ideal.

I would suggest that we instead leverage the container's `Names` value from the API response, which stays constant across re-creates as long as a name is pinned.

Alternatively, we could leverage the in-built migration functionality of HomeAssistant's Entity Registry (similar to other integrations) to specify a new unique id for existing entities.

To my (current) understanding, this should only require a small change in one of a couple of places.
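For the migration route, a minimal sketch of what that could look like, using HomeAssistant's `async_migrate_entries` helper. The `coordinator.containers` list, the function name, and the name-keyed unique_id format are assumptions for illustration, not the integration's actual shape:

```python
from homeassistant.helpers import entity_registry as er

async def async_migrate_unique_ids(hass, config_entry, coordinator):
    """One-time migration: re-key entities from Id-based to name-based unique ids."""

    # Assumed: map each old Id-based unique_id to the container's pinned name.
    id_to_name = {
        f"portainer-containers-{c['Id']}": c["Names"][0].lstrip("/")
        for c in coordinator.containers
    }

    def _migrate(entry: er.RegistryEntry) -> dict | None:
        name = id_to_name.get(entry.unique_id)
        if name is None:
            return None  # leave entities we can't match untouched
        return {"new_unique_id": f"portainer-containers-{name}"}

    await er.async_migrate_entries(hass, config_entry.entry_id, _migrate)
```

Switching new installs to name-based unique ids avoids the problem going forward, but existing installs would still need a one-time migration like this so dashboards keep their entity ids.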
Please let me know what you think and how we should continue. I'd love to get this resolved ASAP, as some of my MANY containers keep accruing new numbered entity suffixes.
In doing some follow-up research around the usage of that migration functionality, I found other integrations doing the same. Their implementation is visible here, but as I mentioned above, it's going to need a pretty significant refactoring of the current integration code.

Let me know what you think?
I support the idea of using the Name value. The current Id-based implementation is spamming me by creating new entities (I am using Watchtower to automatically update images).
Same here, also using Watchtower with more than 30 containers. Almost every day I have to replace entities in my dashboard because the entity ids keep incrementing and the old ones become unavailable.
Hello,

How to reproduce it
I did a test and hit the same behaviour. Interestingly, reloading the integration fixes it: only the last-created sensor stays active, and the rest are marked as no longer being provided by the portainer integration.

Summarize the issue (my POV)
Describe the issue
Sometimes (usually after a restart of homeassistant) the list of entities will have duplicates, so I'll have two entities for the same container:
sensor.portainer_myserver_watchtower_2
sensor.portainer_myserver_watchtower_3
When I click into sensor.portainer_myserver_watchtower_2, it says the entity is no longer in use by the integration. This means my list of entities keeps growing until I manually remove old entities.
How to reproduce the issue
Hard to reproduce; it seems to happen randomly when I restart homeassistant or the machine itself.
Expected behavior
Container entities should always be the same, and new ones should not be created. There should only ever be one entity:
sensor.portainer_myserver_watchtower