Infrastructure
Open MPI's hosting infrastructure spans four main locations:
- New Mexico Consortium
- HostGator
- Amazon Web Services (AWS)
- GitHub
The following sections describe various pieces of infrastructure that are used by the Open MPI project.
We use Google for three purposes:
- Domain registration
- DNS hosting
- File sharing (Google Drive)
The New Mexico Consortium (NMC) hosts Open MPI's Mailman mailing lists (on the vhost lists.open-mpi.org). The main point of contact for the NMC is through Los Alamos: Howard Pritchard (it is LANL's relationship with the NMC that allows us to use their Mailman server).
www.open-mpi.org (and friends) are hosted on HostGator. Jeff Squyres ([email protected]) and Ralph Castain ([email protected]) are the two primary points of contact in the Open MPI team for access to the HostGator account.
AWS hosts MTT, the Trac archives, the Open MPI Jenkins server, the nightly build server, and GitHub hooks execution. Brian Barrett ([email protected]) is the primary point of contact for account access. There is some documentation available.
For much of Open MPI's history, Indiana University hosted much of our infrastructure. Open MPI migrated away from IU in 2016, but we want to thank IU and especially DongInn Kim, who supported us for all those years!
All GitHub hosting is done under the Open MPI community (https://github.com/open-mpi/).
The Open MPI community owns several domain names, currently registered and DNS-hosted in Google Domains under the user [email protected]:
- open-mpi.org
- open-mpi.com
- open-mpi.net
- openmpi.org
- openmpi.com
- openmpi.net
The first one (open-mpi.org) is the main domain that we use for everything. It contains many sub-names (mtt, www, lists, various CNAME and TXT records, etc.).
The other domains solely exist for web redirects (also conveniently hosted at Google) to https://www.open-mpi.org/.
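As a quick sanity check (illustrative only, not an official procedure), you can see one of those redirects from the command line; the exact status code and headers returned may differ:

```
# Show where one of the redirect-only domains sends browsers.
# The precise response headers depend on how the redirect is configured.
curl -sI http://open-mpi.com/ | grep -iE '^(HTTP|Location)'
```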
There is a shared folder hosted by [email protected] that contains all the scans of signed Open MPI 3rd party contribution agreements (from before we started accepting code with a Signed-off-by token in the commit message).
We also put whatever other shared documents are useful in that folder, including (but not limited to):
- Financial accounting spreadsheet
- Various Google Form surveys
- etc.
This shared folder is shared out to a small number of community members' Google accounts so that they can add/edit/delete files without needing to log in to [email protected].
This file-sharing mechanism is generally intended for administrative purposes. It is not generally used for real-time file sharing, code development, etc.
The web server for the main Open MPI web site (www.open-mpi.org) is at HostGator.
- The bulk of the content for this web site is maintained in a git repository on GitHub (https://github.com/open-mpi/ompi-www).
- Some content comes from scripts at IU that archive all mails sent to Open MPI mailing lists. These files are not in the GitHub repo, and do not appear on the mirror sites. See the "Mailing list hosting" section, below, for more details. The mailing list archive section of the web site has some CGI scripts for searching the web archives, too. That was set up by IU/DongInn; I don't know the details of how it works.
- Other content comes from the nightly snapshot tarballs that are generated locally, and do not appear on the mirror sites. See the "Official tarball generation" section, below, for more details.
Other than that, it's a pretty straight-up PHP web site. The PHP is written very much as C coders write PHP -- it doesn't use any fancy schmancy modern PHP frameworks or jQuery or anything like that. It's pretty plain vanilla stuff, but with a bunch of helper PHP subroutines for mundane / common / repeated stuff (like the papers section, the FAQ, the tarball listings, and so on).
To make sure that old links in old emails, commit messages, and comments do not go stale, we still run Open MPI's old Trac instance at HostGator at https://svn.open-mpi.org/.
The data in this Trac does not change any more -- it's read-only. Its only purpose in life is to prevent a decade of old links in emails/commits/comments from becoming stale.
There is quite a lot of historical data in this Trac instance.
May 2017 update: It's quite possible that we'll be putting the main Open MPI web site behind a CDN, and therefore won't need the mirroring program any more. Stay tuned.
Open MPI's web mirrors keep in sync with the upstream web site via git (i.e., they "git pull" from GitHub via cron) or rsync. IU enables rsync on lion.crest.iu.edu (i.e., www.open-mpi.org) with the following configuration:
```
-bash-4.1$ hostname
lion.crest.iu.edu
-bash-4.1$ cat /etc/rsyncd.conf
motd file = /etc/rsyncd.motd
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
lock file = /var/run/rsyncd.lck
[ompi_web]
path = /l/osl/www/www.open-mpi.org
exclude = . .svn svn-commit* *~ community/lists/*/ mtt/vis/ /members/ .git
use chroot = no
uid = apache
gid = apache
read only = yes
list = yes
```
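On the mirror side, the corresponding cron jobs would look something like the entries below. The local paths and schedule are made up for illustration; the rsync module name (ompi_web) comes from the configuration above.

```
# Hypothetical mirror-side crontab entries (paths and timing are examples only).
# Option 1: pull the web content from GitHub:
0 * * * *   cd /var/www/open-mpi && git pull --quiet
# Option 2: rsync the "ompi_web" module from www.open-mpi.org:
30 * * * *  rsync -a --delete rsync://www.open-mpi.org/ompi_web/ /var/www/open-mpi/
```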
As noted above, the archiving of mails from the Open MPI mailing lists and the nightly snapshot tarballs are not in the GitHub repo, and therefore do not appear on the mirrors.
We have an "Open MPI" (open-mpi) organization on GitHub, which contains all of our members, repositories, webhooks, etc.
Current repos at GitHub:
- https://github.com/open-mpi/hwloc: Main hwloc code repository
- https://github.com/open-mpi/hwloc-debian: Debian packaging for hwloc
- https://github.com/open-mpi/mtt: Main MTT code repository
- https://github.com/open-mpi/netloc: Main netloc code repository
- https://github.com/open-mpi/docs: User-level documentation project (low activity)
- https://github.com/open-mpi/ompi-www: Open MPI web site
NMC has set up Mailman for many Open MPI mailing lists. Each list is configured similarly, but they're all slightly different. Some lists are "broadcast only" (e.g., announce), while others are discussion lists (i.e., any subscribed member can post).
No lists allow posting from unsubscribed members. Periodically, someone complains about this, but we have never changed this policy, and it has effectively eliminated spam from the lists.
We use The Mail Archive (mail-archive.com) for web hosting of all of our mailing lists (they used to be hosted at Indiana University, but we moved them to the Mail Archive when IU left the Open MPI community).
For example: https://www.mail-archive.com/[email protected]/
To get this archiving, we simply subscribe [email protected] to the mailing list.
Our main contact with the good people at the Mail Archive is [email protected] (which is, itself, a mailing list).
Many, many URLs of web-archived messages -- from the users/devel lists, in particular -- have been used in SVN commit messages, Trac messages, FAQ entries, and other random places around the web. DO NOT CHANGE THE URLS OF WEB ARCHIVED MESSAGES IF AT ALL POSSIBLE!
Prior to using the Mail Archive, Indiana University made web archives of all of Open MPI's mailing lists. Through some Apache and scripting magic, the archives appeared in Open MPI's main web site navigation.
When we moved away from Indiana U, we essentially re-generated the PHP for all of these messages into fixed files in the ompi-www repository, along with a header noting that that particular web archive is frozen and providing a link to the new archive (although not to the individual message in the new archive).
Example: https://www.open-mpi.org/community/lists/users/2005/11/0391.php
Nothing on the main web site links to these old posts any more, but they are still included in the web site repo so that a decade of history (i.e., links to these old posts) doesn't become stale.
Two types of tarballs are created on aws.open-mpi.org:
- Automated nightly snapshot tarballs (via cron).
- Manual release tarballs.
Both types of tarballs require specific tuples of the GNU Autotools (see http://www.open-mpi.org/svn/building.php for specific details). On aws.open-mpi.org, there are installations for all the relevant tuple combinations. Environment modulefiles are used to select the right tuple for a given build (e.g., "module load autotools/ompi-v1.8" will load the right Autotools tuple to build Open MPI 1.8.x tarballs).
The "mpiteam" user on aws is used to make both types of tarballs.
Note that the "make dist" process is used to make both types of tarballs, and is highly influenced by the VERSION file at the top of the branch being built (e.g., trunk/VERSION). The VERSION file affects the version number of the tarball and all the shared library version numbers.
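Putting those pieces together, a by-hand build of a v1.8-series tarball looks roughly like the sketch below. This is only an illustration: the checkout path is hypothetical, and the real procedure is encoded in the scripts described next.

```
# As the mpiteam user on aws.open-mpi.org (sketch only; paths are hypothetical).
module load autotools/ompi-v1.8   # select the Autotools tuple for the v1.8 series
cd ~/checkouts/v1.8               # a hypothetical checkout of the v1.8 branch
./autogen.pl                      # regenerate the build system with the loaded tuple
./configure
make dist                         # produces the tarballs, versioned per the VERSION file
```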
The contrib/nightly/openmpi-nightly-tarball.sh script is run via cron as the mpiteam user to generate nightly tarballs. It assumes a specific directory structure and environment module setup (e.g., the ability to "module load autotools/ompi-VERSION"). It also assumes a specific output directory format -- the output tarballs are placed in the docroot of the www.open-mpi.org web site, in the directory for the nightly snapshot tarballs of the relevant series (e.g., /l/osl/www/www.open-mpi.org/nightly/v1.8).
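The cron entry for this would look something like the line below; the schedule and the location of the script checkout shown here are illustrative, not the actual values on aws.

```
# Hypothetical crontab entry for the mpiteam user; real schedule/paths may differ.
# The script drops its output tarballs into the web docroot for the relevant
# series (e.g., /l/osl/www/www.open-mpi.org/nightly/v1.8).
0 2 * * *  $HOME/ompi-scripts/openmpi-nightly-tarball.sh
```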
Manual tarballs are created for Open MPI formal releases via the contrib/nightly/openmpi-release.sh script. This is almost always run interactively by a human. It deposits the tarballs that it creates in $HOME/openmpi/release/.
The MPI Testing Tool (MTT) consists of two hosted pieces, both of which live at AWS:
- Postgres database containing all the MTT data.
- Web submission / reporter
The database is in Postgres because, at the time, MySQL did not support table partitioning, which is absolutely essential to MTT reporter performance. MySQL now supports such features, so we may be able to move back to MySQL someday (if someone wants to migrate the code away from Postgres-flavored-SQL to MySQL-flavored-SQL).
The web submission / reporter piece is basically PHP code, along with a cron job that slurps new submission data into the Postgres database.
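For illustration only, the cron side of that could look something like the entry below; the script name, paths, and schedule are hypothetical, not the actual ones on the AWS host.

```
# Hypothetical crontab entry: periodically ingest newly submitted MTT results
# into the Postgres database.  All names here are made up for illustration.
*/15 * * * *  /var/www/mtt/bin/ingest-submissions.sh >> /var/log/mtt-ingest.log 2>&1
```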
There was work in the 2013-2014 school year at U. New Mexico to re-write the MTT reporter from scratch and use a much more modern and extensible interface. This work was completed, but needs additional integration and scale testing before it can replace the current reporter. Hopefully, that can occur over the next 12 months (it's mainly an issue of finding someone to actually do this work).