[web/task split] add topology constraints for each deployment #1234

Merged

Conversation

thedoubl3j
Member

SUMMARY

Fixes a portion of #1182.

Adds the ability for users to set topology spread constraints once for the whole deployment (applied to both the web and task deployments) or to set a specific constraint per deployment.

ISSUE TYPE
  • New or Enhanced Feature
TESTING

Expected results are similar to previous iterations of this type of feature: the default topology_spread_constraints key applies to the whole deployment and can be overridden by web_topology_spread_constraints or task_topology_spread_constraints for their respective deployments. (A non-interactive way to read back the applied constraints is sketched after the test cases.)
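
For context, a minimal sketch of where these keys sit in the AWX spec. The apiVersion, kind, and instance name here are assumptions based on a typical awx-operator install, not part of this PR:

  apiVersion: awx.ansible.com/v1beta1   # assumed; adjust to your operator install
  kind: AWX
  metadata:
    name: awx                           # placeholder instance name
  spec:
    # default, applied to both the web and task deployments
    topology_spread_constraints: |
      - maxSkew: 100
        topologyKey: "topology.kubernetes.io/zone"
        whenUnsatisfiable: "ScheduleAnyway"
        labelSelector:
          matchLabels:
            app.kubernetes.io/part-of: 'awx'
    # optional per-deployment override; task_topology_spread_constraints takes the same shape
    web_topology_spread_constraints: |
      - maxSkew: 100
        topologyKey: "topology.kubernetes.io/zone"
        whenUnsatisfiable: "ScheduleAnyway"
        labelSelector:
          matchLabels:
            app.kubernetes.io/name: 'awx-web'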

  • test case 1: apply the default topology_spread_constraints to the whole deployment
    CRD Change
  topology_spread_constraints: |
    - maxSkew: 100
      topologyKey: "topology.kubernetes.io/zone"
      whenUnsatisfiable: "ScheduleAnyway"
      labelSelector:
        matchLabels:
          app.kubernetes.io/part-of: 'awx'

result: topologySpreadConstraints is applied to both deployments and set the same

➜  awx-operator git:(add_topology_spread) kubectl edit deployment awx-web 
...
      serviceAccountName: awx
      terminationGracePeriodSeconds: 30
      topologySpreadConstraints:
      - labelSelector:
          matchLabels:
            app.kubernetes.io/part-of: awx
        maxSkew: 100
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: ScheduleAnyway
➜  awx-operator git:(add_topology_spread) kubectl edit deployment awx-task 
      serviceAccountName: awx
      terminationGracePeriodSeconds: 30
      topologySpreadConstraints:
      - labelSelector:
          matchLabels:
            app.kubernetes.io/part-of: awx
        maxSkew: 100
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: ScheduleAnyway
  • test case 2: apply web_topology_spread_constraints to change the web deployment while leaving the task deployment the same as in tc1
    CRD Change
  topology_spread_constraints: |
    - maxSkew: 100
      topologyKey: "topology.kubernetes.io/zone"
      whenUnsatisfiable: "ScheduleAnyway"
      labelSelector:
        matchLabels:
          app.kubernetes.io/part-of: 'awx'
  web_topology_spread_constraints: |
    - maxSkew: 100
      topologyKey: "topology.kubernetes.io/zone"
      whenUnsatisfiable: "ScheduleAnyway"
      labelSelector:
        matchLabels:
          app.kubernetes.io/name: 'awx-web'

result: the awx-web deployment changed, but the task deployment stayed the same as in tc1

➜  awx-operator git:(add_topology_spread) kubectl edit deployment awx-web  
      terminationGracePeriodSeconds: 30
      topologySpreadConstraints:
      - labelSelector:
          matchLabels:
            app.kubernetes.io/name: awx-web
        maxSkew: 100
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: ScheduleAnyway
➜  awx-operator git:(add_topology_spread) kubectl edit deployment awx-task
      terminationGracePeriodSeconds: 30
      topologySpreadConstraints:
      - labelSelector:
          matchLabels:
            app.kubernetes.io/part-of: awx
        maxSkew: 100
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: ScheduleAnyway
  • test case 3: apply task_topology_spread_constraints to change the task deployment while leaving the web deployment constraint in place (same as tc2)
    CRD Change
  task_topology_spread_constraints: |
    - maxSkew: 100
      topologyKey: "topology.kubernetes.io/zone"
      whenUnsatisfiable: "ScheduleAnyway"
      labelSelector:
        matchLabels:
          app.kubernetes.io/name: 'awx-task'
  web_topology_spread_constraints: |
    - maxSkew: 100
      topologyKey: "topology.kubernetes.io/zone"
      whenUnsatisfiable: "ScheduleAnyway"
      labelSelector:
        matchLabels:
          app.kubernetes.io/name: 'awx-web'

result: the task deployment's spread constraint from tc1 is overwritten with the new constraint, and web remains the same

➜  awx-operator git:(add_topology_spread) kubectl edit deployment awx-task
      serviceAccountName: awx
      terminationGracePeriodSeconds: 30
      topologySpreadConstraints:
      - labelSelector:
          matchLabels:
            app.kubernetes.io/name: awx-task
        maxSkew: 100
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: ScheduleAnyway
➜  awx-operator git:(add_topology_spread) kubectl edit deployment awx-web
      serviceAccount: awx
      serviceAccountName: awx
      terminationGracePeriodSeconds: 30
      topologySpreadConstraints:
      - labelSelector:
          matchLabels:
            app.kubernetes.io/name: awx-web
        maxSkew: 100
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: ScheduleAnyway
  • test case 4: remove all constraints and reset back to default behavior
    CRD Change
    remove all previous topology_spread_constraints
    result is default behavior: topologySpreadConstraints is no longer set on either deployment
➜  awx-operator git:(add_topology_spread) kubectl edit deployment awx-web
      securityContext: {}
      serviceAccount: awx
      serviceAccountName: awx
      terminationGracePeriodSeconds: 30
      volumes:
➜  awx-operator git:(add_topology_spread) kubectl edit deployment awx-task
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: awx
      serviceAccountName: awx
      terminationGracePeriodSeconds: 30
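
Instead of opening kubectl edit just to read the values back, a non-interactive check also works (a sketch, assuming the default awx-web / awx-task deployment names in the current namespace):

  # print the rendered constraints; empty output means none are set (as in test case 4)
  kubectl get deployment awx-web -o jsonpath='{.spec.template.spec.topologySpreadConstraints}'
  kubectl get deployment awx-task -o jsonpath='{.spec.template.spec.topologySpreadConstraints}'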

@thedoubl3j thedoubl3j changed the title add topology constraints for each deployment [web/task split] add topology constraints for each deployment Feb 13, 2023
@TheRealHaoLiu TheRealHaoLiu merged commit fc26a78 into ansible:feature_web-task-split Feb 13, 2023
@thedoubl3j thedoubl3j deleted the add_topology_spread branch February 13, 2023 18:08