
Data mover micro service design #7576

Merged

Conversation


@Lyndon-Li Lyndon-Li commented Mar 28, 2024

Fix issue #7198. Add the design for data mover micro service


codecov bot commented Mar 28, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 58.79%. Comparing base (24941b4) to head (544d796).
Report is 262 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #7576      +/-   ##
==========================================
- Coverage   61.70%   58.79%   -2.91%     
==========================================
  Files         263      345      +82     
  Lines       28861    28766      -95     
==========================================
- Hits        17808    16914     -894     
- Misses       9793    10423     +630     
- Partials     1260     1429     +169     


@Lyndon-Li Lyndon-Li force-pushed the data-mover-micro-service-design branch from adc1979 to 2264d56 Compare March 28, 2024 08:37

In order to have the same capabilities and permissions as node-agent, the following pod configurations are inherited from node-agent and set in the backupPod/restorePod's spec:
- Volumes: some configMaps are mapped as volumes into node-agent, so we add the same volumes to the backupPod/restorePod
- Environment Variables
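The inheritance above amounts to copying the relevant fields from node-agent's pod spec into the new pod's spec. A minimal sketch, using simplified structs standing in for `corev1.PodSpec` (all names here are illustrative, not Velero's actual implementation):

```go
package main

import "fmt"

// Simplified stand-ins for corev1.Volume / corev1.EnvVar / corev1.PodSpec.
type Volume struct{ Name string }
type EnvVar struct{ Name, Value string }
type PodSpec struct {
	Volumes []Volume
	Env     []EnvVar
}

// inheritFromNodeAgent copies the volumes and environment variables of the
// node-agent pod spec into the backupPod/restorePod spec, so the new pod
// sees the same configMap mappings and environment as node-agent.
func inheritFromNodeAgent(nodeAgent PodSpec, target *PodSpec) {
	target.Volumes = append(target.Volumes, nodeAgent.Volumes...)
	target.Env = append(target.Env, nodeAgent.Env...)
}

func main() {
	nodeAgent := PodSpec{
		Volumes: []Volume{{Name: "host-pods"}, {Name: "some-configmap"}},
		Env:     []EnvVar{{Name: "NODE_NAME", Value: "node-1"}},
	}
	var backupPod PodSpec
	inheritFromNodeAgent(nodeAgent, &backupPod)
	fmt.Println(len(backupPod.Volumes), len(backupPod.Env)) // 2 1
}
```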
Contributor

If time permits, we should follow best practice and limit/minimize the scope and capabilities of the backupPod/restorePod.

Contributor Author

Yes, theoretically the backupPod/restorePod only needs to host the VGDP instance. Besides that, we also need a kube client to fulfill the following communication and control purposes:

  • Watch certain status of the DUCR/DDCR for synchronization with the controller, i.e., ready to start VGDP; VGDP cancellation
  • Deliver information to the controller, i.e., VGDP progress

There are no other functionalities in the backupPod/restorePod.

Contributor

We may open an issue to limit the scope of the roles of the backupPod/restorePod but it can be optional for phase 1 implementation.

Contributor Author

Added a clarification for the roles/privileges/capabilities of the backupPod/restorePod.
We can open an issue for further changes after this micro service feature is merged.

Contributor
@shawn-hurley shawn-hurley left a comment

I don't see this, but it feels like a perfect thing to at least think through how a third-party datamover could now plug in to this system.

@Lyndon-Li
Contributor Author

I don't see this, but it feels like a perfect thing to at least think through how a third-party datamover could now plug in to this system.

The scope of this design is VGDP. VGDP is a part of VBDM (Velero Built-in Data Mover), so this design doesn't cover 3rd-party data movers.
3rd-party data movers still follow this section in the Volume Snapshot Data Movement design. Specifically, 3rd-party data movers handle the DUCR/DDCR themselves and are free to make their data movers micro-service style or monolithic style.

I can add a clarification for this in the design.

@Lyndon-Li Lyndon-Li force-pushed the data-mover-micro-service-design branch from 2264d56 to 3527da1 Compare June 4, 2024 10:22
@shawn-hurley
Contributor

So my thought process was:

We could use a generic controller to handle the DUCR/DDCR and then move all the data movers to micro services and have the generic controller be able to control what data mover is called when.

The biggest problem with the design at the moment is that I can only have a single Data Mover. I have to copy the full VBDM code and extend it for my use case (a particular volume for a particular driver).

Maybe this isn't how you want to solve that problem, and that is ok, but it is an option.

I would like to see us think through a solution to this problem though.

@Lyndon-Li
Contributor Author

So my thought process was:

We could use a generic controller to handle the DUCR/DDCR and then move all the data movers to micro services and have the generic controller be able to control what data mover is called when.

The biggest problem with the design at the moment is that I can only have a single Data Mover. I have to copy the full VBDM code and extend it for my use case (a particular volume for a particular driver).

Maybe this isn't how you want to solve that problem, and that is ok, but it is an option.

I would like to see us think through a solution to this problem though.

I think this is possible theoretically, though we would still need to cover lots of details, e.g., how we publish the communication mechanism so that 3rd-party data movers could implement it.
However, the biggest problem is that this conflicts with the existing 3rd-party data mover integration mechanism already defined in the Volume Snapshot Data Movement design --- the 3rd-party data mover is integrated through DUCR/DDCR, which is simple and clear.
Therefore, without enough motivation, we don't want to modify the original design. The benefit I can see in your proposal is that 3rd-party data mover authors don't need to implement the DUCR/DDCR controllers, but I don't think that is compelling enough. Moreover, I am not sure all 3rd-party data mover authors want this.

Anyway, this is out of the scope of the current design. I suggest opening another issue after this design is merged and continuing the discussion there.

@Lyndon-Li Lyndon-Li force-pushed the data-mover-micro-service-design branch 5 times, most recently from 18f2382 to 4b8f92d Compare June 11, 2024 05:48
@weshayutin
Contributor

Is this design a candidate for v1.15?

@Lyndon-Li
Contributor Author

Is this design a candidate for v1.15?

Yes


We use two mechanisms for the watch:
**Pod Phases**: VGDP Watcher watches the backupPod/restorePod's phases as updated by Kubernetes. That is, VGDP Watcher creates an informer to watch the pod resource of the backupPod/restorePod and detects when the pod reaches one of the terminal phases (i.e., PodSucceeded, PodFailed). We also check the availability & status of the backupPod/restorePod at the beginning of the watch so as to detect the start of the backupPod/restorePod.
**Custom Kubernetes Events**: VGDP-MS generates Kubernetes events and associates them with the DUCR/DDCR when the VGDP instance starts/stops and on progress updates; VGDP Watcher then creates another informer to watch the Event resources associated with the DUCR/DDCR.
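The dual watch can be sketched as a select over two sources, where pod phases are authoritative for termination and events only carry auxiliary data. This is a conceptual sketch with channels standing in for the two informers, not the actual VGDP Watcher code:

```go
package main

import "fmt"

type PodPhase string

const (
	PodRunning   PodPhase = "Running"
	PodSucceeded PodPhase = "Succeeded"
	PodFailed    PodPhase = "Failed"
)

// watch consumes pod-phase updates and DUCR/DDCR-associated events until the
// pod reaches a terminal phase. Events only carry auxiliary information
// (progress, logs); state transitions are driven by pod phases alone.
func watch(phases <-chan PodPhase, events <-chan string) (PodPhase, []string) {
	var aux []string
	for {
		select {
		case p := <-phases:
			if p == PodSucceeded || p == PodFailed {
				return p, aux // terminal phase: stop watching
			}
		case e := <-events:
			aux = append(aux, e) // auxiliary only, never decisive
		}
	}
}

func main() {
	phases := make(chan PodPhase, 2)
	events := make(chan string, 1)
	events <- "progress: 50%"
	phases <- PodRunning
	phases <- PodSucceeded
	final, _ := watch(phases, events)
	fmt.Println(final) // Succeeded
}
```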
Contributor

@sseago @shawn-hurley Any comments on this approach? Essentially we intend to use k8s events as a way for communication from the DM pod to node-agent.

Contributor

When we say events, we are suggesting creating events.k8s.io/v1 objects tied to the DUCR/DDCR.

I would worry about the watcher taking actions based on the events; from the docs:

Events should be treated as informative, best-effort, supplemental data.

Can VGDP-MS also update the status of the DDCR/DUCR? Sorry if that is a silly question. Adding this to my queue to look at more closely.

Contributor Author
@Lyndon-Li Lyndon-Li Jun 25, 2024

Events should be treated as informative, best-effort, supplemental data.

Yes, we are using events.k8s.io/v1 here while sticking to this principle. Specifically:

  • Events never deliver decisive information. E.g., the state transitions, i.e., data mover starts/stops/cancelled/crashed, are detected by monitoring the backupPod/restorePod's status.
  • Events are only used to deliver auxiliary information, i.e., logs for the data movement operations, progress of the data movements, etc.; events also help to deliver cross-check information so that we know more about what happened when troubleshooting corner cases.

However, events are not superfluous either, as explained in the Dual mode event watch section.

Contributor Author
@Lyndon-Li Lyndon-Li Jun 25, 2024

Can VGDP-MS also update the status of the DDCR/DUCR?

We excluded this option from the very beginning. It would require dedicated conflict resolution (when updating DUCR/DDCR), which makes the data mover's workflow complicated and fragmented.
Therefore, we follow another principle here --- every update to the DDCR/DUCR must be done by the controllers.

@Lyndon-Li Lyndon-Li force-pushed the data-mover-micro-service-design branch 2 times, most recently from 39757fb to 4fb4c95 Compare June 24, 2024 05:06
This log-redirecting mechanism is thread safe since the hook acquires a write lock before writing to the log buffer. This guarantees that the node-agent log contains no corruption after redirecting, and that the redirected logs and the original node-agent logs are not interleaved into each other.

## node-agent
node-agent is still required. Even though VGDP no longer runs inside node-agent, node-agent still hosts the data mover controller, which reconciles DUCR/DDCR and operates on DUCR/DDCR in the other steps before the VGDP instance is started, i.e., Accept, Expose, etc.
Collaborator

In the current design, the reason DUCR/DDCR is reconciled in the node-agent (and not the velero pod) is that processing the DU/DD required host path access, so it needed to run on the same node as the mounting pod. If we're doing all of the kopia work from the backupPod/restorePod, wouldn't it make more sense to move the DU/DD reconcilers to the main velero server?

Collaborator

If we did this, then node agent would remain privileged always, but if you're using datamover without fs-backup, then you wouldn't need to enable the node agent.

Contributor Author
@Lyndon-Li Lyndon-Li Jun 27, 2024

Moving the DUCR/DDCR controller to the backupPod/restorePod is technically feasible, but we didn't choose this for the following reasons:

  1. Keeping the DUCR/DDCR controller outside of the backupPod/restorePod helps to better control the data movement workflow. We have one part in one module and the other part in another module, something like the concept of "multiple failure domains": when one part fails, we always have some way to be notified and to update the DUCR/DDCR
  2. Since we want the DUCR/DDCR controller outside of the backupPod/restorePod, we have two choices --- the velero server and the node-agent server. We didn't choose the velero server primarily because of isolation: we want to isolate the control path from the data path, and we also want to isolate resource backup from volume data backup
  3. Following another concept, we want to run as small a part as possible inside the backupPod/restorePod, which is why the design calls it the VGDP micro service: we only run VGDP in the backupPod/restorePod. That also adheres to the micro-service concept
  4. By keeping everything else in node-agent, we don't bring any impact to the existing data mover features, i.e., node concurrency

Collaborator
@shubham-pampattiwar shubham-pampattiwar left a comment

While running VGDP instances as dedicated pods offers better resource control, it may introduce scalability challenges, particularly in large clusters with many backup/restore operations. Each operation will now spawn a new pod, which could lead to a significant increase in resource consumption and potential scheduling delays. Should the design include something on implementing resource quotas and limits specific to VGDP pods to prevent overconsumption of cluster resources?

@Lyndon-Li Lyndon-Li force-pushed the data-mover-micro-service-design branch from 4fb4c95 to 213589a Compare June 27, 2024 02:41
@Lyndon-Li
Contributor Author

Lyndon-Li commented Jun 27, 2024

While running VGDP instances as dedicated pods offers better resource control, it may introduce scalability challenges, particularly in large clusters with many backup/restore operations. Each operation will now spawn a new pod, which could lead to a significant increase in resource consumption and potential scheduling delays. Should the design include something on implementing resource quotas and limits specific to VGDP pods to prevent overconsumption of cluster resources ?

Actually, we don't increase the allocation of pod resources in this design. With the old design, we already create one backupPod/restorePod for each volume; they merely run a dummy command. And the pods keep the same lifecycle before and after this design.
We also already have the control --- the existing node-agent concurrency feature controls how many VGDP instances run on one node (configurable, default value 1), and this still works for the VGDP micro service.
Notably, this control is per node, with at least 1 VGDP per node. But I think that should be enough; if a cluster has thousands of nodes, the API server/etcd should already have been scaled out and so should be powerful enough to handle several pods on each node.

This is a good point to clarify in the design; I will add it.
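The per-node concurrency control described above can be sketched as a counting limiter keyed by node. This is a conceptual sketch, not Velero's actual implementation; the configurable limit corresponds to the node-agent concurrency setting:

```go
package main

import (
	"fmt"
	"sync"
)

// nodeLimiter caps how many VGDP instances may run concurrently on each node.
type nodeLimiter struct {
	mu    sync.Mutex
	limit int
	inUse map[string]int
}

func newNodeLimiter(limit int) *nodeLimiter {
	return &nodeLimiter{limit: limit, inUse: map[string]int{}}
}

// TryAcquire reports whether another VGDP instance may start on the node.
func (l *nodeLimiter) TryAcquire(node string) bool {
	l.mu.Lock()
	defer l.mu.Unlock()
	if l.inUse[node] >= l.limit {
		return false // at capacity: the DUCR/DDCR stays queued
	}
	l.inUse[node]++
	return true
}

// Release frees a slot when a VGDP instance finishes.
func (l *nodeLimiter) Release(node string) {
	l.mu.Lock()
	defer l.mu.Unlock()
	if l.inUse[node] > 0 {
		l.inUse[node]--
	}
}

func main() {
	l := newNodeLimiter(1)              // default: 1 VGDP instance per node
	fmt.Println(l.TryAcquire("node-1")) // true
	fmt.Println(l.TryAcquire("node-1")) // false: node-1 is at capacity
	fmt.Println(l.TryAcquire("node-2")) // true: other nodes are independent
	l.Release("node-1")
	fmt.Println(l.TryAcquire("node-1")) // true again after release
}
```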

@Lyndon-Li Lyndon-Li force-pushed the data-mover-micro-service-design branch from 213589a to 544d796 Compare June 27, 2024 03:29
@Lyndon-Li
Contributor Author

Done all the comments, please continue to review.

@Lyndon-Li
Contributor Author

@sseago @shubham-pampattiwar Any more comments on this PR? Once you finish the review, please help to approve it.

@shawn-hurley
Contributor

All good from my perspective.

I do still think that this micro-service approach gives us a chance to extend the design and make the data movers more pluggable. I think that would be a huge win, but it makes sense as a follow-on.
