Steps to reproduce
1. Create an Experiment and a Run from the Data Passing pipeline.
2. Update the TTL_SECONDS_AFTER_WORKFLOW_FINISH env var in the ml-pipeline-persistence Deployment to something short, e.g. 60 (see the sketch after this list).
3. Wait for the Argo Workflow to succeed.
4. After the configured time, the persistence agent marks the workflow as completed and removes the Argo Workflow.
5. Open the UI and try to view the logs of one of the pods.
6. The request fails with "Failed to retrieve pod logs." Clicking on Details shows a popup with "Error response: Could not get main container logs: Error: Unable to retrieve workflow status: [object Object]".
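For step 2, a minimal sketch of shortening the TTL, assuming the upstream manifests' kubeflow namespace (the exact Deployment name may differ between manifest versions):

```bash
# Shorten the TTL so the persistence agent garbage-collects the Argo Workflow
# shortly after it finishes. Namespace and Deployment name are assumed from
# the upstream manifests and may differ in other installs.
kubectl -n kubeflow set env deployment/ml-pipeline-persistence \
  TTL_SECONDS_AFTER_WORKFLOW_FINISH=60
kubectl -n kubeflow rollout status deployment/ml-pipeline-persistence

# Watch the Workflow disappear once the TTL elapses after completion.
kubectl -n kubeflow-user-example-com get workflows.argoproj.io -w
```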
Expected result
I would expect to still see the Pod logs when the workflow has been deleted, since they are stored in MinIO as part of Argo's log archiving.
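One way to confirm the archived logs really are in MinIO (a sketch only; the service name, credentials, bucket, and key layout below are the upstream-manifest defaults and are assumptions for any other install):

```bash
# Port-forward the in-cluster MinIO and list the archived workflow artifacts.
kubectl -n kubeflow port-forward svc/minio-service 9000:9000 &

# Default KFP MinIO credentials and bucket; adjust if yours differ.
mc alias set kfp http://localhost:9000 minio minio123
mc ls -r kfp/mlpipeline/artifacts/tutorial-data-passing-h2c74/
# Each pod should have a main.log object under its key prefix.
```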
Materials and reference
I see references to what looks like the same error in #11010 and #11339, but I'm not entirely sure it's the same issue.
Note that I'm only using the upstream manifests and their example installation, which doesn't deviate from the MinIO/Argo setup provided in this repo.
Also, when looking at the requests, even after the Workflow is GCed I still see log requests going to the following URL:
http://localhost:8080/pipeline/k8s/pod/logs?podname=tutorial-data-passing-h2c74-system-container-impl-306858994&runid=8542d9b2-89ee-47bc-a8fc-7210978115eb&podnamespace=kubeflow-user-example-com&createdat=2024-11-05
Not sure if this is expected, but it seems odd that the UI tries to fetch Kubernetes pod logs when we know the pod no longer exists in the cluster.
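For reference, that request can be replayed outside the browser (this assumes the same localhost:8080 port-forward the UI uses; in a multi-user install the browser's auth session cookie is also needed, shown here as a placeholder):

```bash
# Replay the frontend's pod-log request with the same query parameters.
curl -G 'http://localhost:8080/pipeline/k8s/pod/logs' \
  --data-urlencode 'podname=tutorial-data-passing-h2c74-system-container-impl-306858994' \
  --data-urlencode 'runid=8542d9b2-89ee-47bc-a8fc-7210978115eb' \
  --data-urlencode 'podnamespace=kubeflow-user-example-com' \
  --data-urlencode 'createdat=2024-11-05' \
  -H 'Cookie: authservice_session=<session-cookie>'
```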
Labels
/area frontend
/area backend
Impacted by this bug? Give it a 👍.
Yes, we should. I am wondering why they are not set by default in the upstream KFP manifests; I would prefer to change them there.
Also, many people change the key format (ARGO_KEYFORMAT = 'artifacts/{{workflow.name}}/{{workflow.creationTimestamp.Y}}/{{workflow.creationTimestamp.m}}/{{workflow.creationTimestamp.d}}/{{pod.name}}') configured in the Argo workflow-controller ConfigMap, so it would be good if it were an environment variable directly exposed in the upstream KFP manifests and not just somewhere in the code.
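A minimal sketch of keeping the two in sync, assuming the upstream manifests' resource names (workflow-controller-configmap and the ml-pipeline-ui Deployment in the kubeflow namespace); ARGO_ARCHIVE_LOGS and ARGO_KEYFORMAT are frontend settings that have to match the controller's configured keyFormat:

```bash
# Inspect the keyFormat configured for the Argo artifact repository.
kubectl -n kubeflow get configmap workflow-controller-configmap \
  -o jsonpath='{.data.artifactRepository}'

# Point the KFP frontend at the archived logs with a matching key format.
kubectl -n kubeflow set env deployment/ml-pipeline-ui \
  ARGO_ARCHIVE_LOGS=true \
  'ARGO_KEYFORMAT=artifacts/{{workflow.name}}/{{workflow.creationTimestamp.Y}}/{{workflow.creationTimestamp.m}}/{{workflow.creationTimestamp.d}}/{{pod.name}}'
```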