[BR] Unable to connect to Plex #91

Closed
RyuunosukeDS3 opened this issue Dec 26, 2023 · 13 comments

@RyuunosukeDS3

Describe the bug
Plex won't let me connect in any form.

To Reproduce
Steps to reproduce the behavior:

  • Download the helm charts
  • Create an application on ArgoCD

Expected behavior
The Plex UI should be reachable.

Screenshots
(screenshot omitted)

Environment:
Client Version: version.Info{Major:"1", Minor:"21+", GitVersion:"v1.21.13-3+b0496756fa948e", GitCommit:"b0496756fa948e718d67351ed8e5293c3a28f0b8", GitTreeState:"clean", BuildDate:"2022-06-08T10:21:43Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/arm64"}

Additional context
Here is my values file:

# Default values for k8s-mediaserver.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

general:
  ingress_host: k8s-mediaserver.k8s.test
  plex_ingress_host: k8s-plex.k8s.test
  image_tag: latest
  podDistribution: cluster # can be "spread" or "cluster"
  #UID to run the process with
  puid: 1000
  #GID to run the process with
  pgid: 1000
  #Persistent storage selections and pathing
  storage:
    customVolume: true  #set to true if not using a PVC (must provide volume below)
    pvcName: mediaserver-pvc
    accessMode: ""
    size: 1500Gi
    pvcStorageClass: ""
    # the path starting from the top level of the pv you're passing. If your share is server.local/share/, then tv is server.local/share/media/tv
    subPaths:
      tv: media/tv
      movies: media/movies
      downloads: downloads
      transmission: transmission
      sabnzbd: sabnzbd
      config: config
    volumes:
      hostPath:
        path: /media/ryuunosukeds3/Raspberry_HD/Docker/movies
  ingress:
    ingressClassName: ""
  nodeSelector: {}

sonarr:
  enabled: true
  container:
    image: docker.io/linuxserver/sonarr
    nodeSelector: {}
    port: 8989
  service:
    type: LoadBalancer
    targetPort: 80
    port: 8989
    nodePort:
    extraLBService: false
    extraLBAnnotations: {}
    # Defines an additional LB service, requires cloud provider service or MetalLB
  ingress:
    enabled: false
    annotations: {}
    path: /sonarr
    tls:
      enabled: false
      secretName: ""
  resources: {}
  volume: {}
    #name: pvc-sonarr-config
    #storageClassName: longhorn
    #annotations:
    #  my-annotation/test: my-value
    #labels:
    #  my-label/test: my-other-value
    #accessModes: ReadWriteOnce
    #storage: 5Gi
    #selector: {}

radarr:
  enabled: true
  container:
    image: docker.io/linuxserver/radarr
    nodeSelector: {}
    port: 7878
  service:
    type: LoadBalancer
    targetPort: 80
    port: 7878
    nodePort:
    # Defines an additional LB service, requires cloud provider service or MetalLB
    extraLBService: false
    extraLBAnnotations: {}
  ingress:
    enabled: false
    annotations: {}
    path: /radarr
    tls:
      enabled: false
      secretName: ""
  resources: {}
  volume: {}
    #name: pvc-radarr-config
    #storageClassName: longhorn
    #annotations: {}
    #labels: {}
    #accessModes: ReadWriteOnce
    #storage: 5Gi
    #selector: {}

jackett:
  enabled: true
  container:
    image: docker.io/linuxserver/jackett
    nodeSelector: {}
    port: 9117
  service:
    type: LoadBalancer
    targetPort: 80
    port: 9117
    nodePort:
    extraLBService: false
    extraLBAnnotations: {}
    # Defines an additional LB service, requires cloud provider service or MetalLB
  ingress:
    enabled: false
    annotations: {}
    path: /jackett
    tls:
      enabled: false
      secretName: ""
  resources: {}
  volume: {}
  #  name: pvc-jackett-config
  #  storageClassName: longhorn
  #  annotations: {}
  #  labels: {}
  #  accessModes: ReadWriteOnce
  #  storage: 5Gi
  #  selector: {}

transmission:
  enabled: true
  container:
    image: docker.io/linuxserver/transmission
    nodeSelector: {}
    port:
      utp: 9091
      peer: 51413
  service:
    utp:
      type: LoadBalancer
      targetPort: 80
      port: 9091
      # if type is NodePort, nodePort must be set
      nodePort:
      # Defines an additional LB service, requires cloud provider service or MetalLB
      extraLBService: false
      extraLBAnnotations: {}
    peer:
      type: LoadBalancer
      targetPort: 51413
      port: 51413
      # if type is NodePort, nodePort and nodePortUDP must be set
      nodePort:
      nodePortUDP:
      # Defines an additional LB service, requires cloud provider service or MetalLB
      extraLBService: false
      extraLBAnnotations: {}
  ingress:
    enabled: false
    annotations: {}
    path: /transmission
    tls:
      enabled: false
      secretName: ""
  config:
    auth:
      enabled: false
      username: ""
      password: ""
  resources: {}
  volume: {}
  #  name: pvc-transmission-config
  #  storageClassName: longhorn
  #  annotations: {}
  #  labels: {}
  #  accessModes: ReadWriteOnce
  #  storage: 5Gi
  #  selector: {}

sabnzbd:
  enabled: true
  container:
    image: docker.io/linuxserver/sabnzbd
    nodeSelector: {}
    port:
      http: 8080
      https: 9090
  service:
    http:
      type: LoadBalancer
      targetPort: 80
      port: 8080
      nodePort:
      # Defines an additional LB service, requires cloud provider service or MetalLB
      extraLBService: false
      extraLBAnnotations: {}
    https:
      type: LoadBalancer
      targetPort: 9090
      port: 9090
      nodePort:
      # Defines an additional LB service, requires cloud provider service or MetalLB
      extraLBService: false
      extraLBAnnotations: {}
  ingress:
    enabled: false
    annotations: {}
    path: /sabnzbd
    tls:
      enabled: false
      secretName: ""
  resources: {}
  volume: {}
  #  name: pvc-plex-config
  #  storageClassName: longhorn
  #  annotations: {}
  #  labels: {}
  #  accessModes: ReadWriteOnce
  #  storage: 5Gi
  #  selector: {}

prowlarr:
  enabled: true
  container:
    image: docker.io/linuxserver/prowlarr
    tag: develop
    nodeSelector: {}
    port: 9696
  service:
    type: LoadBalancer
    targetPort: 80
    port: 9696
    nodePort:
    extraLBService: false
    extraLBAnnotations: {}
  ingress:
    enabled: false
    annotations: {}
    path: /prowlarr
    tls:
      enabled: false
      secretName: ""
  resources: {}
  volume: {}
  #  name: pvc-prowlarr-config
  #  storageClassName: longhorn
  #  annotations: {}
  #  labels: {}
  #  accessModes: ReadWriteOnce
  #  storage: 5Gi
  #  selector: {}

plex:
  enabled: true
  claim: "REDACTED"
  replicaCount: 1
  container:
    image: docker.io/linuxserver/plex
    nodeSelector: {}
    port: 32400
  service:
    type: LoadBalancer
    targetPort: 80
    port: 32400
    nodePort:
    # Defines an additional LB service, requires cloud provider service or MetalLB
    extraLBService: false
    extraLBAnnotations: {}
  ingress:
    enabled: false
    annotations: {}
    tls:
      enabled: false
      secretName: ""
  resources: {}
  #  limits:
  #    cpu: 100m
  #    memory: 100Mi
  #  requests:
  #    cpu: 100m
  #    memory: 100Mi
  volume: {}
  #  name: pvc-plex-config
  #  storageClassName: longhorn
  #  annotations: {}
  #  labels: {}
  #  accessModes: ReadWriteOnce
  #  storage: 5Gi
  #  selector: {}

Application file:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: k8s-mediaserver-operator
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  destination:
    namespace: k8s-mediaserver-operator
    name: in-cluster
  project: default
  source:
    path: k8s-mediaserver-operator/helm
    repoURL: https://github.com/RyuunosukeDS3/argocd.git
    targetRevision: main
    helm:
      valueFiles:
        - values.yaml
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
@RyuunosukeDS3
Author

The problem was caused by the readiness checks in Plex and Jackett. Removing those from the template fixed the problem.

@RyuunosukeDS3
Author

Now we need to figure out if it's a problem specific to my setup, or if this is happening to everyone else.

@kubealex
Owner

The automated tests check the readiness of each app, and each release is tested against the probes. What error did you get when you deployed it? @RyuunosukeDS3

@RyuunosukeDS3
Author

It said it couldn't reach... The ingress didn't work for me either, so I ended up using LoadBalancer services for everything, and now it works.

@autolyticus

I have also faced sporadic issues with the readinessProbe when trying to deploy today. When the server hasn't been claimed, Plex seems to return 401 Unauthorized, which Kubernetes treats as a probe failure and marks the pod as not ready. I'm not sure whether readinessProbes let you customize which return codes are acceptable, but I feel a 401 should not be treated as an error in this case.
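
For context: the kubelet treats any HTTP status code from 200 up to (but not including) 400 as a probe success, and everything else, including 401, as a failure; httpGet probes have no field for whitelisting extra codes. If you wanted to tolerate the 401 without dropping the probe entirely, an exec probe is one possible workaround. This is only a sketch and assumes curl is available inside the linuxserver/plex image:

# Sketch of an exec-based readiness check that treats 200 or 401 as "ready".
# Assumes curl exists in the container image; port and path are illustrative.
readinessProbe:
  exec:
    command:
      - /bin/sh
      - -c
      - |
        code=$(curl -s -o /dev/null -w '%{http_code}' http://localhost:32400/)
        [ "$code" = "200" ] || [ "$code" = "401" ]
  initialDelaySeconds: 10
  periodSeconds: 15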

@vitofico

Same problem here. I am deploying with ArgoCD and Plex is the only service that is "unhealthy", as it returns a 401 error. I tried both with and without a claim token and hit the same issue.

@autolyticus

As mentioned above, it's due to the misconfigured readinessProbes. I ended up running

helm template helm -n mediaserver --create-namespace -f values.yml > rendered.yml

and manually removing the readinessProbes from rendered.yml for the Plex Deployment.
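
For anyone repeating this workaround: the block to delete from the plex Deployment in rendered.yml looks roughly like the following. The exact path, port, and timings in the chart may differ, so treat this as illustrative only.

# Illustrative httpGet readinessProbe as rendered for the plex Deployment;
# deleting this block (and re-applying the manifest) skips the check entirely.
readinessProbe:
  httpGet:
    path: /
    port: 32400
  initialDelaySeconds: 10
  periodSeconds: 10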

@parkerbryan

I've been struggling with this, and I suspect it has to do with #84 changing the readiness probe from a simple TCP connect to an HTTP check. Plex does not always return a 200 response, which is what httpGet appears to expect by default, so even when Plex is working as designed the readiness check fails and there is no ingress. Plex is still running: it can reach out to the internet, and the Plex app on various devices can pull content and transcode/stream it, but I am unable to reach the instance directly unless I proxy through an SSH tunnel into the internal cluster network.

I think this HTTP readiness check, at least in the case of Plex, is too aggressive.
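
To make the contrast concrete, a plain TCP readiness check of the kind #84 replaced would look roughly like this sketch (not necessarily the chart's exact previous template). It only verifies that the port accepts connections, so whatever HTTP status Plex returns (200, 302, 401, ...) never comes into play:

# Sketch of a TCP-based readiness check: succeeds as soon as the port
# accepts a connection, regardless of the HTTP response code.
readinessProbe:
  tcpSocket:
    port: 32400
  initialDelaySeconds: 10
  periodSeconds: 10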

@kubealex
Owner

Thanks for the great feedback, that's definitely something I'll need.
I'll take the checks offline and see if I can come up with a consistent way of testing (that's the main point) during the integration tests without affecting the deployment.

@kubealex
Owner

I'm not able to reproduce this. Even doing all the steps manually, this is what I get: the readiness probes succeed, and I am able to complete the configuration.

mediaserver                       jackett-8676c88788-kq6kp                                       1/1     Running     0             111s
mediaserver                       plex-55c9677c47-ftkfw                                          1/1     Running     0             111s
mediaserver                       prowlarr-798ccd854-nmxw6                                       1/1     Running     0             111s
mediaserver                       radarr-6dd774c49d-bd8sl                                        1/1     Running     0             111s
mediaserver                       sabnzbd-566b78d7cd-xcgws                                       1/1     Running     0             111s
mediaserver                       sonarr-77d7449699-cvqbf                                        1/1     Running     0             111s
mediaserver                       transmission-f995f9769-kzm4d                                   1/1     Running     0             111s

@parkerbryan

I have a local NAS where the config is stored, and the storage is fine. When I rm that entire config directory and Plex starts anew, it comes up, returns a 200, goes green in Rancher, and ingress works. Readiness checks always pass with a fresh config. After that, the response codes change; I think it returns a 302 Found redirecting to plex.tv, and the readiness check doesn't like that. The only way to get the readiness check to succeed is to start fresh with no config.

@InputObject2
Contributor

Even with a brand-new config/PVC and token, I was not able to get Plex to return anything other than a 401 to the probe on /.

When setting the probe path to /web instead, it works fine; maybe we could use that instead.
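
In probe terms that suggestion amounts to pointing httpGet at /web instead of /, along these lines (a sketch; the settings actually adopted may differ):

# Sketch: per the reports above, /web answers successfully even when /
# returns 401 on a claimed server; the values below are illustrative.
readinessProbe:
  httpGet:
    path: /web
    port: 32400
  initialDelaySeconds: 10
  periodSeconds: 10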

@kubealex
Owner

Fixed by #102.
Thank you!
