Startup Semaphore #606

Open · jonwinton opened this issue Sep 4, 2018 · 2 comments · May be fixed by #664

@jonwinton (Contributor) commented:
Quick semaphore link

Problem

In a production environment there are usually multiple Clay instances running at once, whether as multiple servers/containers or as a cluster on a single machine. At startup, Clay bootstraps data for components/sites, a step we only need (and want) to run on one instance per cluster/deployment group.

Proposal

Add semaphore support to the storage API, letting whichever storage module is chosen dictate how the data is stored. The data we might send across (see the sketch after this list):

  • Which node has begun bootstrapping. This could be determined with the os module, using a hostname/PID combination to generate the key.
  • Which deploy is being handled. Can we rely on the package.json version of the Clay instance? Multiple environments are not always going to see this increment.
  • The state of the startup process: not started, running, finished, failed? 0, 1, 2, 3?
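A minimal sketch of what that data might look like, assuming the chosen storage module exposes async get/put methods keyed by string; claimBootstrap, the key format, and the numeric states are hypothetical names for illustration, not anything in Amphora today:

```js
'use strict';

const os = require('os');
const { version } = require('./package.json'); // the deploy being handled

// Hypothetical numeric states for the startup process.
const STATES = { NOT_STARTED: 0, RUNNING: 1, FINISHED: 2, FAILED: 3 };

/**
 * Try to claim the bootstrap for this deploy. `storage` is whatever
 * storage module the instance was configured with.
 */
async function claimBootstrap(storage) {
  const node = `${os.hostname()}-${process.pid}`, // which node is bootstrapping
    key = `semaphore:bootstrap:${version}`;       // one semaphore per deploy

  const existing = await storage.get(key).catch(() => null);

  // Another node already started (or finished) this deploy's bootstrap.
  if (existing && existing.state !== STATES.FAILED) {
    return { acquired: false, semaphore: existing };
  }

  const semaphore = { node, version, state: STATES.RUNNING, updatedAt: Date.now() };

  await storage.put(key, semaphore);
  return { acquired: true, semaphore };
}
```

Note that a plain get-then-put is not atomic; a real implementation would want to lean on whatever compare-and-set primitive the storage backend offers so two nodes can't both claim the semaphore at once.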

Questions

  • What happens if the startup process fails and needs to be restarted? (One possible recovery path is sketched after this list.)
  • Should plugins be able to use the same semaphore data to determine how they start up? For example, amphora-search does not ALWAYS need to check for the presence of indices on every Clay instance.
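One hypothetical answer to the first question, continuing the sketch above: write a failed state back to the semaphore when the bootstrap throws, and have claimBootstrap treat failed as claimable so a restarted instance can pick the work back up. A plugin like amphora-search could read the same key to decide whether its own startup checks are needed.

```js
// Run the bootstrap under the semaphore and record the outcome, so a
// failed run can be retried by the next instance that asks.
async function runBootstrap(storage, bootstrap) {
  const claim = await claimBootstrap(storage);

  if (!claim.acquired) return claim.semaphore; // another node owns this deploy

  const key = `semaphore:bootstrap:${claim.semaphore.version}`;

  try {
    await bootstrap(); // the actual component/site data bootstrapping
    await storage.put(key, Object.assign({}, claim.semaphore, { state: STATES.FINISHED, updatedAt: Date.now() }));
  } catch (err) {
    // claimBootstrap treats FAILED as claimable, so a restart retries this.
    await storage.put(key, Object.assign({}, claim.semaphore, { state: STATES.FAILED, updatedAt: Date.now() }));
    throw err;
  }

  return claim.semaphore;
}
```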
@amelvisfranco (Contributor) commented:

Semaphora