Add support for Intel MPI #389
Conversation
Others who have experience with this, please help review this one.
cc @kawych
cc @zidarko
LGTM. It would be great if you could document which Intel MPI versions this was tested with.
```diff
-func (c *MPIJobController) getOrCreateWorkersService(mpiJob *kubeflow.MPIJob) (*corev1.Service, error) {
-	svc, err := c.serviceLister.Services(mpiJob.Namespace).Get(mpiJob.Name + workerSuffix)
+func (c *MPIJobController) getOrCreateWorkersService(job *kubeflow.MPIJob) (*corev1.Service, error) {
+	return c.getOrCreateService(job, newWorkersService)
```
Can you simplify the code by creating the Service here and removing the newWorkersService() factory function? And the same for the launcher?
I'm getting rid of the factory in the getOrCreateService function, but I have to keep newWorkersService because it's used in tests.
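The shape being discussed can be sketched as follows. This is a simplified illustration with stand-in types, not the actual mpi-operator controller code: `MPIJob`, `Service`, and the cache map here are minimal substitutes for the real kubeflow/corev1 types and lister.

```go
package main

import "fmt"

// Simplified stand-ins for the real kubeflow and corev1 types (assumptions,
// not the actual mpi-operator definitions).
type MPIJob struct{ Name, Namespace string }
type Service struct{ Name string }

// serviceFactory mirrors the role of helpers like newWorkersService:
// it builds the desired Service for a given job.
type serviceFactory func(job *MPIJob) *Service

// getOrCreateService looks the Service up in a cache and creates it from the
// factory on a miss -- the shape discussed in the review, with the factory
// kept as a parameter so tests can still exercise newWorkersService directly.
func getOrCreateService(cache map[string]*Service, job *MPIJob, newSvc serviceFactory) *Service {
	key := job.Namespace + "/" + job.Name
	if svc, ok := cache[key]; ok {
		return svc
	}
	svc := newSvc(job)
	cache[key] = svc
	return svc
}

// newWorkersService is a hypothetical stand-in for the real helper.
func newWorkersService(job *MPIJob) *Service {
	return &Service{Name: job.Name + "-worker"}
}

func main() {
	cache := map[string]*Service{}
	job := &MPIJob{Name: "pi", Namespace: "default"}
	svc := getOrCreateService(cache, job, newWorkersService)
	fmt.Println(svc.Name) // pi-worker
}
```

Keeping the factory as a parameter rather than inlining the Service construction is what lets the tests call newWorkersService on its own.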
```shell
source $set_intel_vars
fi

function resolve_host() {
```
I think this can still cause issues sometimes: there can be a window when the launcher is able to resolve its own hostname but the workers are not. This usually happens to me on the second run if I schedule two runs in a row.
It should be fine, though, since this is only an example.
Are you sure that was the problem?
When I only had the check for the launcher, I was getting flaky startups. Now that I also check all the workers, the job starts every time. I couldn't really debug what was going on with just the launcher check, as Hydra doesn't log the output of the ssh calls :(
Adds the field .spec.mpiImplementation, defaulting to OpenMPI. The Intel implementation requires a Service fronting the launcher.
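A rough sketch of how such a field might behave, based only on the description above. The Go identifiers and helper functions here are assumptions for illustration, not the merged mpi-operator API:

```go
package main

import "fmt"

// MPIImplementation mirrors the new .spec.mpiImplementation field; the value
// strings follow the PR description ("OpenMPI" default, "Intel" opt-in), but
// the Go identifiers are illustrative.
type MPIImplementation string

const (
	MPIImplementationOpenMPI MPIImplementation = "OpenMPI"
	MPIImplementationIntel   MPIImplementation = "Intel"
)

// defaultImplementation applies the documented default when the field is unset.
func defaultImplementation(v MPIImplementation) MPIImplementation {
	if v == "" {
		return MPIImplementationOpenMPI
	}
	return v
}

// launcherNeedsService reflects the note that the Intel implementation
// requires a Service fronting the launcher.
func launcherNeedsService(v MPIImplementation) bool {
	return v == MPIImplementationIntel
}

func main() {
	fmt.Println(defaultImplementation(""))                    // OpenMPI
	fmt.Println(launcherNeedsService(MPIImplementationIntel)) // true
}
```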
Any other suggestions?
LGTM 👍
/assign @terrytangyuan
Thanks!
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: kawych, terrytangyuan
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Adds the field .spec.mpiImplementation, defaulting to OpenMPI.
Intel MPI is very flaky at startup, as opposed to OpenMPI. In particular, it won't retry connections to workers if their hostnames are not resolvable. The entrypoint handles this.