
Add support for Intel MPI #389

Merged (2 commits) on Aug 3, 2021
Conversation

@alculquicondor (Collaborator) commented Jul 28, 2021

  • Adds the field .spec.mpiImplementation, defaulting to OpenMPI (a minimal sketch of the API change follows this list)
  • The Intel implementation requires a Service fronting the launcher
  • Passes the number of slots through an environment variable instead of the hostfile, since some versions of Intel MPI ignore the slots defined in the hostfile (see the sketch after this description)
  • Adds an example that uses Intel MPI
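A minimal sketch of what the API change could look like in the Go types, assuming identifier names derived from the description (the actual definitions in the operator's API package may differ):

```go
// Package and identifier names are assumptions for illustration.
package v2

// MPIImplementation selects which MPI flavor the operator configures.
type MPIImplementation string

const (
	MPIImplementationOpenMPI MPIImplementation = "OpenMPI"
	MPIImplementationIntel   MPIImplementation = "Intel"
)

type MPIJobSpec struct {
	// ...existing fields elided...

	// MPIImplementation defaults to OpenMPI when unset.
	MPIImplementation MPIImplementation `json:"mpiImplementation,omitempty"`
}
```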

Intel MPI is very flaky at startup, as opposed to OpenMPI. In particular, it won't retry connections to workers if their hostnames are not resolvable. The entrypoint handles this.
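On the slots point: Intel MPI can take per-host placement from the I_MPI_PERHOST environment variable, so the controller can set the slot count on the launcher container instead of writing slots= entries into the hostfile. Whether the PR uses this exact variable is an assumption, and addSlotsEnv is a hypothetical helper:

```go
package main

import (
	"fmt"
	"strconv"

	corev1 "k8s.io/api/core/v1"
)

// addSlotsEnv passes the slot count to Intel MPI through an environment
// variable rather than the hostfile. I_MPI_PERHOST is a documented Intel
// MPI variable; the PR's exact choice of variable is an assumption.
func addSlotsEnv(c *corev1.Container, slotsPerWorker int32) {
	c.Env = append(c.Env, corev1.EnvVar{
		Name:  "I_MPI_PERHOST",
		Value: strconv.Itoa(int(slotsPerWorker)),
	})
}

func main() {
	launcher := corev1.Container{Name: "launcher"}
	addSlotsEnv(&launcher, 2)
	fmt.Println(launcher.Env)
}
```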

@terrytangyuan (Member) left a comment


Others who have experience in this area, please help review this one.

@alculquicondor (Collaborator, Author)

cc @kawych

@gaocegege (Member)

cc @zidarko

@gaocegege (Member) left a comment

@google-oss-robot requested a review from carmark on Jul 29, 2021.
@kawych left a comment

LGTM. It would be great if you could document which Intel MPI versions this was tested with.

```diff
-func (c *MPIJobController) getOrCreateWorkersService(mpiJob *kubeflow.MPIJob) (*corev1.Service, error) {
-	svc, err := c.serviceLister.Services(mpiJob.Namespace).Get(mpiJob.Name + workerSuffix)
+func (c *MPIJobController) getOrCreateWorkersService(job *kubeflow.MPIJob) (*corev1.Service, error) {
+	return c.getOrCreateService(job, newWorkersService)
+}
```

Can you simplify the code by creating the service here and removing the newWorkersService() factory function? And the same with the launcher?

@alculquicondor (Collaborator, Author)

I'm getting rid of the factory in the getOrCreateService function. But I have to keep the newWorkersService because it's used for tests.
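A minimal sketch of the shared get-or-create path under discussion, using a fake clientset where the real controller goes through a lister, and a hypothetical stand-in for newWorkersService:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/fake"
)

// getOrCreateService looks the Service up first and only creates it on
// NotFound. The factory parameter mirrors newWorkersService, which the
// author keeps because tests use it.
func getOrCreateService(client kubernetes.Interface, namespace, name string,
	newSvc func(string) *corev1.Service) (*corev1.Service, error) {
	svc, err := client.CoreV1().Services(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err == nil {
		return svc, nil
	}
	if !errors.IsNotFound(err) {
		return nil, err
	}
	return client.CoreV1().Services(namespace).Create(context.TODO(), newSvc(name), metav1.CreateOptions{})
}

func main() {
	client := fake.NewSimpleClientset()
	newSvc := func(name string) *corev1.Service { // stand-in for newWorkersService
		return &corev1.Service{ObjectMeta: metav1.ObjectMeta{Name: name}}
	}
	svc, err := getOrCreateService(client, "default", "pi-worker", newSvc)
	fmt.Println(svc.Name, err)
}
```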

```bash
if [ -n "$set_intel_vars" ]; then  # guard assumed; the excerpt begins mid-if
  source $set_intel_vars
fi

function resolve_host() {
  # body assumed: block until the host resolves, since Intel MPI won't retry
  until getent hosts "$1" >/dev/null; do sleep 1; done
}
```

I think this can still cause issues sometimes, i.e. there is a window when the launcher is able to resolve its own hostname but the workers are not resolvable yet. This usually happens to me on the second run if I schedule two runs in a row.

It should be fine though, since this is only an example.

@alculquicondor (Collaborator, Author) commented Jul 29, 2021

Are you sure that was the problem?
When I only had the check for the launcher, I was getting flaky startups. Now that I also check all the workers, the job starts every time. I couldn't really debug what was going on with just the launcher check, as Hydra doesn't log the output of the ssh calls :(
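As an illustration of the startup check described above (the actual entrypoint is a shell script), a Go sketch that blocks until the launcher and every worker hostname resolves; the hostnames and timeout are hypothetical:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// waitForHosts blocks until every hostname resolves, polling once a second.
// Per the thread: checking only the launcher was flaky; checking the
// launcher plus all workers made startup reliable.
func waitForHosts(hosts []string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for _, h := range hosts {
		for {
			if _, err := net.LookupHost(h); err == nil {
				break
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("host %s did not resolve in time", h)
			}
			time.Sleep(time.Second)
		}
	}
	return nil
}

func main() {
	// Hypothetical names following the <job>-worker-<i> convention.
	hosts := []string{"pi-launcher", "pi-worker-0", "pi-worker-1"}
	if err := waitForHosts(hosts, 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```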

Adds the field .spec.mpiImplementation, defaults to OpenMPI

The Intel implementation requires a Service fronting the launcher.
@alculquicondor (Collaborator, Author)

Any other suggestions?

@gaocegege (Member)

LGTM 👍
/lgtm

@alculquicondor (Collaborator, Author)

/assign @terrytangyuan
for approval

@terrytangyuan (Member) left a comment

Thanks!

/lgtm
/approve

@google-oss-robot

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: kawych, terrytangyuan

