infra.ci.jenkins.io on arm64 (controller and agents) #3823

Comments
As per jenkins-infra/helpdesk#3823
Co-authored-by: Damien Duportal <[email protected]>
Co-authored-by: Hervé Le Meur <[email protected]>
As per jenkins-infra/helpdesk#3823: start using arm64 agents on the arm64 nodepool in infra (privatek8s), before being able to use the ALLINONEVERSION.
Plan to migrate infra.ci.jenkins.io to arm64
Checks to add:
Post-mortem: we need to provide a PV and PVC matching the source disk:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: disk.csi.azure.com
  name: jenkins-infraci-snap
spec:
  capacity:
    storage: 64Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: managed-csi-premium-zrs-retain
  csi:
    driver: disk.csi.azure.com
    volumeHandle: jenkins-infraci-snap
    volumeAttributes:
      fsType: ext4
```
Note that the `volumeHandle` is shortened to the disk name, not the full handle.
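For context, the Azure Disk CSI driver normally records the full ARM resource ID of the disk as the volume handle. A sketch of the two forms, with placeholder subscription and resource group (do not fill these in from here; the real IDs are not in this thread):

```yaml
# Full CSI volume handle as the driver would normally provision it
# (<subscription-id> and <resource-group> are placeholders):
#   volumeHandle: /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Compute/disks/jenkins-infraci-snap
# Shortened form actually used in the PV above:
volumeHandle: jenkins-infraci-snap
```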
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-infraci-snap
  namespace: jenkins-infra
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 64Gi
  volumeName: jenkins-infraci-snap
  storageClassName: managed-csi-premium-zrs-retain
```
As it was getting a timeout, we had to add this to the Jenkins Helm chart values:
The migration part cannot be done by merging the PR as stated above:
The PR still needs to be merged so as not to introduce changes when infra.ci starts up successfully.
As per jenkins-infra/helpdesk#3823 (comment): create a new storage class on privatek8s to be used for ZRS multizone volumes. We need the volume to be accessible from both eastus2-1 for the arm64 nodes and eastus2-3 for our intel/amd nodes.
Co-authored-by: Damien Duportal <[email protected]>
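The storage class itself is not shown in the thread. A minimal sketch of what a ZRS retain class for the Azure Disk CSI driver could look like, using the class name referenced by the manifests above (all parameter values are assumptions, not the actual jenkins-infra configuration):

```yaml
# Hypothetical reconstruction of the ZRS multizone storage class.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-csi-premium-zrs-retain
provisioner: disk.csi.azure.com
parameters:
  skuName: Premium_ZRS        # zone-redundant premium disks, attachable across zones
reclaimPolicy: Retain         # keep the disk if the PVC is deleted
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
```

`Premium_ZRS` is what makes the disk reachable from both eastus2-1 (arm64) and eastus2-3 (intel/amd) nodes, since LRS disks are pinned to a single zone.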
As per jenkins-infra/helpdesk#3823 (comment).
Co-authored-by: Damien Duportal <[email protected]>
Update: infra.ci is now officially running on arm64. Next steps: cleanup (next week)
…h agents (new subnet) (#665). Related to jenkins-infra/helpdesk#3823. This PR follows up jenkins-infra/kubernetes-management#5126. It fixes the failure to spin up VM agents since the `arm64` migration, as the infra.ci.jenkins.io controller was moved to a new subnet in #658.
Signed-off-by: Damien Duportal <[email protected]>
Update: fixing a few errors discovered after the controller migration:
=> Build queue is now empty \o/
(Edited) Issue tracking release.ci's migration to arm64: #4042
Cleanup (infra.ci, weekly.ci, release.ci)
Service(s)
Azure, infra.ci.jenkins.io
Summary
As for publick8s, we should create an arm64 nodepool within the privatek8s Kubernetes cluster in order to migrate the Jenkins agents to arm64.
We also need to migrate infra.ci.jenkins.io itself to arm64, taking care to move the PV/PVC to the correct zone for the arm64 nodes (zone 1).
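Once the nodepool exists, agent pods are typically pinned to it via the standard architecture node label. A minimal pod spec fragment (illustrative only; the taint/toleration is an assumption about how the nodepool might be configured, and the real jenkins-infra pod templates are not shown in this thread):

```yaml
# Hypothetical fragment of a Jenkins Kubernetes agent pod spec.
nodeSelector:
  kubernetes.io/arch: arm64   # standard well-known Kubernetes node label
tolerations:
  # Assumed taint on the arm64 nodepool; adjust to the actual configuration.
  - key: architecture
    operator: Equal
    value: arm64
    effect: NoSchedule
```

Without such a selector, the scheduler may still place amd64-only agent images on the new nodes, so the images themselves also need multi-arch (or arm64) variants.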
Reproduction steps
No response