
[openVSXProxy] Mark the usage of PVC optional #14603

Merged: 1 commit merged into main from tar/openvsxproxy-pvc-optional on Nov 21, 2022

Conversation

@Pothulapati (Contributor)

Description

Currently, OpenVSXProxy is the only non-optional component that needs PVCs to work. Using a PVC in the AWS reference architecture means components can no longer be restarted in a different zone, because EBS-backed PVCs are bound to a single zone.

As the EKS reference architecture does not enable any other component that uses PVCs (e.g. Minio), this problem is specific to OpenVSXProxy.

With the PVC disabled, OpenVSXProxy's Redis will lose its cache whenever a restart occurs, which does not seem like a big deal for self-hosted customers.

Signed-off-by: Tarun Pothulapati [email protected]

Related Issue(s)

Fixes #14529

How to test

Release Notes

[openVSXProxy] Mark the usage of PVC optional

Documentation

Werft options:

  • /werft with-local-preview
    If enabled, this will build install/preview
  • /werft with-preview
  • /werft with-large-vm
  • /werft with-integration-tests=all
    Valid options are all, workspace, webapp, ide, jetbrains, vscode, ssh

@Pothulapati (Contributor, Author)

/hold as we need to either update ops or switch the condition to be enabled by default

@Pothulapati (Contributor, Author) commented Nov 11, 2022

/werft run

👍 started the job as gitpod-build-tar-openvsxproxy-pvc-optional.1
(with .werft/ from main)

@Pothulapati marked this pull request as ready for review on November 11, 2022 12:24
@Pothulapati requested review from a team on November 11, 2022 12:24
@iQQBot (Contributor) commented Nov 11, 2022

```go
}, {
	Name:      "redis-data",
	MountPath: "/data",
}},
```

Is it OK that the pod has this mount but no PVC or volumes?

@Pothulapati force-pushed the tar/openvsxproxy-pvc-optional branch from 8802661 to e2cb152 on November 11, 2022 12:46
@roboquat added size/M and removed size/S labels on Nov 11, 2022
@Pothulapati (Contributor, Author) commented Nov 11, 2022

@iQQBot You are right, we need an emptyDir volume, which I have just added! 👀
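
For context, here is a minimal sketch of what that conditional volume wiring could look like in the installer's Go code. The helper name and flag plumbing are illustrative; only the redis-data name and the emptyDir approach come from this thread:

```go
package openvsxproxy

import corev1 "k8s.io/api/core/v1"

// redisDataVolume (hypothetical helper) returns the pod-level volume backing
// the "redis-data" mount when the PVC is disabled. With the PVC enabled, the
// StatefulSet's volumeClaimTemplates supply the volume instead, so nothing
// needs to be added here.
func redisDataVolume(disablePVC bool) []corev1.Volume {
	if !disablePVC {
		return nil
	}
	return []corev1.Volume{{
		Name: "redis-data",
		VolumeSource: corev1.VolumeSource{
			// emptyDir means the Redis cache is lost on every restart,
			// which the PR description considers acceptable.
			EmptyDir: &corev1.EmptyDirVolumeSource{},
		},
	}}
}
```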

@iQQBot (Contributor) commented Nov 11, 2022

Changing it to DisablePVC looks weird; it requires the user to use the experimental config.

@Pothulapati (Contributor, Author) commented Nov 11, 2022

@iQQBot Yes, because this issue is specific to EKS; with GKE this seems to be handled automatically, and changing existing functionality did not seem like a great idea! 👀

@iQQBot (Contributor) commented Nov 11, 2022

> @iQQBot Yes, because this issue is specific to EKS; with GKE this seems to be handled automatically, and changing existing functionality did not seem like a great idea! 👀

What I mean is that users should not have to use the experimental config; maybe move it to the regular config?

@Pothulapati force-pushed the tar/openvsxproxy-pvc-optional branch from e2cb152 to f52d82b on November 15, 2022 10:23
@Pothulapati (Contributor, Author)

@iQQBot Moved the config to OpenVSX.Proxy.DisablePVC instead of keeping it in experimental.
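
As a rough sketch, the regular (non-experimental) config could carry the flag like this. Only the OpenVSX.Proxy.DisablePVC path is confirmed by the comment above; the struct and JSON tag names are assumptions:

```go
package config

// Assumed shape of the installer config around the new flag; only the
// OpenVSX.Proxy.DisablePVC path is taken from the discussion.
type Config struct {
	OpenVSX OpenVSX `json:"openVSX"`
}

type OpenVSX struct {
	Proxy OpenVSXProxy `json:"proxy"`
}

type OpenVSXProxy struct {
	// DisablePVC swaps the openvsx-proxy Redis cache from a PVC to an
	// emptyDir volume, trading cache persistence for zone mobility.
	DisablePVC bool `json:"disablePVC"`
}
```

With this shape, a self-hosted user would set openVSX.proxy.disablePVC: true in the installer config rather than reaching into experimental settings.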

@iQQBot (Contributor) left a comment

LGTM

@mrsimonemms (Contributor)

Have you tried disabling the PVC on an existing installation? I recall we had lots of issues when amending a PVC on the in-cluster storage (I think), which is why we never made it any bigger. I wondered if removing the PVC causes any issues.

@mrsimonemms (Contributor) commented Nov 15, 2022

/werft run publish-to-kots

👍 started the job as gitpod-build-tar-openvsxproxy-pvc-optional.4
(with .werft/ from main)

@mrsimonemms (Contributor) left a comment

/hold

I've left a comment that I would prefer were reviewed, but I'm happy for you to merge if you don't think it necessary or relevant.

@Pothulapati force-pushed the tar/openvsxproxy-pvc-optional branch from f52d82b to ba95953 on November 16, 2022 07:20
@Pothulapati (Contributor, Author) commented Nov 16, 2022

> I recall we had lots of issues when amending a PVC on the in-cluster storage (I think), which is why we never made it any bigger. I wondered if removing the PVC causes any issues.

@mrsimonemms I haven't tested it against an existing cluster, but considering it's just a StatefulSet that will no longer have a PVC, my guess is this shouldn't be complex. 🤔
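
One caveat worth noting (my addition, not from the thread): StatefulSet spec fields other than the pod template, replicas, and update strategy, including volumeClaimTemplates, are immutable, so an in-place update that drops the PVC is rejected by the API server and the object has to be deleted and recreated. A sketch with client-go, assuming the StatefulSet is named openvsx-proxy in a gitpod namespace:

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig (path is an assumption).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Orphan the pods so openvsx-proxy keeps serving while the installer
	// recreates the StatefulSet without volumeClaimTemplates; the new
	// object then adopts and rolls the orphaned pods.
	orphan := metav1.DeletePropagationOrphan
	if err := client.AppsV1().StatefulSets("gitpod").Delete(
		context.Background(), "openvsx-proxy",
		metav1.DeleteOptions{PropagationPolicy: &orphan},
	); err != nil {
		panic(err)
	}
}
```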

@Pothulapati (Contributor, Author)

/unhold

@roboquat merged commit 2763de8 into main on Nov 21, 2022
@roboquat deleted the tar/openvsxproxy-pvc-optional branch on November 21, 2022 07:04
@roboquat added the "deployed: IDE" (IDE change is running in production) label on Nov 21, 2022
Development

Successfully merging this pull request may close these issues:

Scaling Up/Down Service Nodes in EKS Causes Problems With Pods That Use Persistent Volumes (#14529)