
Commit

Merge branch 'master' into feature/node-gracefull-shutdown
Schnitzel authored Aug 28, 2019
2 parents cca1963 + 2500386 commit 2bac352
Showing 15 changed files with 223 additions and 100 deletions.
4 changes: 2 additions & 2 deletions docs/using_lagoon/drupal/lagoonize.md
@@ -2,9 +2,9 @@

## 1. Lagoon Drupal Setting Files

- In order for Drupal to work with Lagoon we need to teach Drupal about Lagoon and Lagoon about Drupal. This happens with copying specific YAML and PHP Files into your Git Repository.
+ In order for Drupal to work with Lagoon we need to teach Drupal about Lagoon and Lagoon about Drupal. This happens by copying specific YAML and PHP Files into your Git repository.

- You find these Files [here](https://github.com/amazeeio/lagoon/tree/master/docs/using_lagoon/drupal). The easiest way is to download them as [ZIP File](https://minhaskamal.github.io/DownGit/#/home?url=https://github.com/amazeeio/lagoon/tree/master/docs/using_lagoon/drupal) and copy them into your Git Repository. For each Drupal Version and Database Type you will find an individual folder. A short overview of what they are:
+ You find [these Files in our GitHub repository](https://github.com/amazeeio/lagoon/tree/master/docs/using_lagoon/drupal); the easiest way is to [download these files as a ZIP file](https://minhaskamal.github.io/DownGit/#/home?url=https://github.com/amazeeio/lagoon/tree/master/docs/using_lagoon/drupal) and copy them into your Git repository. For each Drupal version and database type you will find an individual folder. A short overview of what they are:

- `.lagoon.yml` - The main file that will be used by Lagoon to understand what should be deployed and many more things. This file has some sensible Drupal defaults, if you would like to edit or modify, please check the specific [Documentation for .lagoon.yml](/using_lagoon/lagoon_yml.md)
- `docker-compose.yml`, `.dockerignore` and `Dockerfile.*` - These files are used to run your Local Drupal Development environment, they tell docker which services to start and how to build them. They contain sensible defaults and many commented lines, it should be pretty much self describing. If you would like to find out more, see [Documentation for docker-compose.yml]()
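For orientation only (an editorial illustration, not part of this commit), a minimal `.lagoon.yml` along the lines of the shipped Drupal defaults might look roughly like this; the exact keys should be checked against the `.lagoon.yml` documentation linked above:

```yaml
# Illustrative sketch only; key names assume the documented Lagoon defaults.
docker-compose-yaml: docker-compose.yml

environments:
  master:
    routes:
      - nginx:
          - "www.example.com"
```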
@@ -94,6 +94,9 @@ objects:
key: appuio.ch/autoscaled
operator: Equal
value: 'true'
+ - effect: NoSchedule
+   key: lagoon/build
+   operator: Exists
volumes:
- name: ${PERSISTENT_STORAGE_NAME}
persistentVolumeClaim:
@@ -83,6 +83,9 @@ objects:
key: appuio.ch/autoscaled
operator: Equal
value: 'true'
+ - effect: NoSchedule
+   key: lagoon/build
+   operator: Exists
volumes:
- name: lagoon-sshkey
secret:
@@ -91,6 +91,9 @@ objects:
key: appuio.ch/autoscaled
operator: Equal
value: 'true'
+ - effect: NoSchedule
+   key: lagoon/build
+   operator: Exists
volumes:
- name: ${SERVICE_NAME}
persistentVolumeClaim:
@@ -82,6 +82,9 @@ objects:
key: appuio.ch/autoscaled
operator: Equal
value: 'true'
+ - effect: NoSchedule
+   key: lagoon/build
+   operator: Exists
containers:
- image: ${SERVICE_IMAGE}
command:
4 changes: 2 additions & 2 deletions images/php/cli/61-php-xdebug-cli-env.sh
@@ -1,7 +1,7 @@
#!/bin/sh

- # Only if XDEBUG_ENABLE is set
- if [ ${XDEBUG_ENABLE+x} ]; then
+ # Only if XDEBUG_ENABLE is not empty
+ if [ ! -z ${XDEBUG_ENABLE} ]; then
# XDEBUG_CONFIG is used by xdebug to decide if an xdebug session should be started in the CLI or not.
# The content doesn't really matter it just needs to be set, the actual connection details are loaded from /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini
export XDEBUG_CONFIG="idekey=lagoon"
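With this change, xdebug in the CLI image turns on whenever `XDEBUG_ENABLE` carries any non-empty value, rather than whenever the variable is merely set. A hypothetical local docker-compose override illustrating the new behaviour (the service name `cli` is an assumption, not taken from this commit):

```yaml
# Hypothetical example, not part of this commit: any non-empty value enables xdebug.
services:
  cli:
    environment:
      XDEBUG_ENABLE: "true"
```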
6 changes: 3 additions & 3 deletions images/php/fpm/entrypoints/60-php-xdebug.sh
@@ -25,12 +25,12 @@ get_dockerhost() {
return
}

- # Only if XDEBUG_ENABLE is set
- if [ ${XDEBUG_ENABLE+x} ]; then
+ # Only if XDEBUG_ENABLE is not empty
+ if [ ! -z ${XDEBUG_ENABLE} ]; then
# remove first line and all comments
sed -i '1d; s/;//' /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini
# add comment that explains how we have xdebug enabled
- sed -i '1s/^/;xdebug enabled as XDEBUG_ENABLE is set, see \/lagoon\/entrypoints\/60-php-xdebug.sh \n/' /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini
+ sed -i '1s/^/;xdebug enabled as XDEBUG_ENABLE is not empty, see \/lagoon\/entrypoints\/60-php-xdebug.sh \n/' /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini

# Only if DOCKERHOST is not already set, allows to set a DOCKERHOST via environment variables
if [[ -z ${DOCKERHOST+x} ]]; then
1 change: 1 addition & 0 deletions node-packages/commons/package.json
@@ -26,6 +26,7 @@
"jsonwebtoken": "^8.0.1",
"kubernetes-client": "^3.15.0",
"lokka": "^1.7.0",
"node-fetch": "^2.6.0",
"ramda": "^0.25.0",
"winston": "^2.4.0",
"winston-logstash": "^0.4.0"
22 changes: 21 additions & 1 deletion node-packages/commons/src/api.js
@@ -11,7 +11,7 @@ import type {
} from './types';

const { Lokka } = require('lokka');
- const { Transport } = require('@lagoon/lokka-transport-http');
+ const { Transport } = require('./lokka-transport-http-retry');
const R = require('ramda');
const { createJWTWithoutUserId } = require('./jwt');
const { logger } = require('./local-logging');
@@ -249,6 +249,25 @@ const getAllEnvironmentBackups = (): Promise<Project[]> =>
`,
);

+ const getEnvironmentBackups = (openshiftProjectName: string): Promise<Project[]> =>
+   graphqlapi.query(
+     `
+       query environmentByOpenshiftProjectName($openshiftProjectName: String!) {
+         environmentByOpenshiftProjectName(openshiftProjectName: $openshiftProjectName) {
+           id
+           name
+           openshiftProjectName
+           project {
+             name
+           }
+           backups {
+             ...${backupFragment}
+           }
+         }
+       }
+     `, { openshiftProjectName }
+   );

const updateCustomer = (id: number, patch: CustomerPatch): Promise<Object> =>
graphqlapi.mutate(
`
@@ -965,6 +984,7 @@ module.exports = {
deleteBackup,
updateRestore,
getAllEnvironmentBackups,
+ getEnvironmentBackups,
updateUser,
deleteUser,
addUserToCustomer,
58 changes: 58 additions & 0 deletions node-packages/commons/src/lokka-transport-http-retry.js
@@ -0,0 +1,58 @@
const {
  Transport: LokkaTransportHttp,
} = require('@lagoon/lokka-transport-http');
const fetchUrl = require('node-fetch');

class NetworkError extends Error {}
class ApiError extends Error {}

// Retries the fetch if operational/network errors occur
const retryFetch = (endpoint, options, retriesLeft = 5, interval = 1000) =>
  new Promise((resolve, reject) =>
    fetchUrl(endpoint, options)
      .then(response => {
        if (response.status !== 200 && response.status !== 400) {
          throw new NetworkError(`Invalid status code: ${response.status}`);
        }

        return response.json();
      })
      .then(({ data, errors }) => {
        if (errors) {
          const error = new ApiError(`GraphQL Error: ${errors[0].message}`);
          error.rawError = errors;
          error.rawData = data;
          throw error;
        }

        resolve(data);
      })
      .catch(error => {
        // Don't retry if limit is reached or the error was not network/operational
        if (retriesLeft === 1 || error instanceof ApiError) {
          reject(error);
          return;
        }

        setTimeout(() => {
          retryFetch(endpoint, options, retriesLeft - 1).then(resolve, reject);
        }, interval);
      }),
  );

class Transport extends LokkaTransportHttp {
  constructor(endpoint, options = {}) {
    super(endpoint, options);
  }

  send(query, variables, operationName) {
    const payload = { query, variables, operationName };
    const options = this._buildOptions(payload);

    return retryFetch(this.endpoint, options);
  }
}

module.exports = {
  Transport,
};
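The new transport is a drop-in replacement for `@lagoon/lokka-transport-http`, which is exactly how `api.js` above consumes it. A minimal usage sketch (the endpoint URL, token variable, and example query are assumptions, not taken from this commit):

```js
// Minimal sketch: the retrying transport plugged into a Lokka client.
const { Lokka } = require('lokka');
const { Transport } = require('./lokka-transport-http-retry');

// Placeholder endpoint and token -- adjust to the real API service.
const apiToken = process.env.API_TOKEN || '';

const client = new Lokka({
  // Network errors and unexpected status codes are retried (up to 5 attempts,
  // 1s apart); GraphQL-level errors (ApiError) are rejected immediately.
  transport: new Transport('http://api:3000/graphql', {
    headers: { Authorization: `Bearer ${apiToken}` },
  }),
});

client
  .query('{ allProjects { name } }') // example query; field name is an assumption
  .then(data => console.log(data))
  .catch(error => console.error(error));
```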
2 changes: 2 additions & 0 deletions services/logs-forwarder/.lagoon.multi.yml
@@ -252,6 +252,8 @@ objects:
total_limit_size 15GB
flush_thread_count 8
overflow_action block
+ retry_type periodic
+ retry_wait 10s
</buffer>
id_key viaq_msg_id
remove_keys viaq_msg_id
2 changes: 2 additions & 0 deletions services/logs-forwarder/.lagoon.single.yml
@@ -235,6 +235,8 @@ objects:
total_limit_size 15GB
flush_thread_count 8
overflow_action block
+ retry_type periodic
+ retry_wait 10s
</buffer>
id_key viaq_msg_id
remove_keys viaq_msg_id

(The diff for one of the changed files is too large to be rendered here.)

@@ -5,7 +5,7 @@ const moment = require('moment');
const { sendToLagoonLogs } = require('@lagoon/commons/src/logs');
const {
addBackup,
- getAllEnvironmentBackups
+ getEnvironmentBackups
} = require('@lagoon/commons/src/api');
const R = require('ramda');

@@ -50,35 +50,30 @@ async function resticbackupSnapshotFinished(webhook: WebhookRequestData) {
  const { webhooktype, event, uuid, body } = webhook;

  try {
-     // Get a list of all environments and their existing backups.
-     const allEnvironmentsResult = await getAllEnvironmentBackups();
-     const allEnvironments = R.prop('allEnvironments', allEnvironmentsResult)
-     const existingBackupIds = R.pipe(
-       R.map(env => env.backups),
-       R.flatten(),
-       R.map(backup => backup.backupId)
-     )(allEnvironments);

    // The webhook contains all existing and new snapshots made for all
-     // environments. Filter out snapshots that have already been recorded and
-     // group remaining (new) by hostname.
+     // environments. Group by hostname.
    const incomingSnapshots = R.prop('snapshots', body);
-     const newSnapshots = R.pipe(
-       R.reject(snapshot => R.contains(snapshot.id, existingBackupIds)),
+     const snapshotsByHostname = R.pipe(
      // Remove pod names suffix from hostnames.
      R.map(R.over(R.lensProp('hostname'), R.replace(/(-cli|-mariadb|-nginx|-solr|-node|-elasticsearch|-redis|-[\w]+-prebackuppod)$/, ''))),
      R.groupBy(snapshot => snapshot.hostname),
      R.toPairs()
    )(incomingSnapshots);

-     for (const [hostname, snapshots] of newSnapshots) {
-       const environment = R.find(R.propEq('openshiftProjectName', hostname), allEnvironments);
+     for (const [hostname, snapshots] of snapshotsByHostname) {
+       // Get environment and existing backups.
+       const environmentResult = await getEnvironmentBackups(hostname);
+       const environment = R.prop('environmentByOpenshiftProjectName', environmentResult)

      if (!environment) {
        continue;
      }

-       for (const snapshot of snapshots) {
+       // Filter out snapshots that have already been recorded.
+       const existingBackupIds = R.pluck('backupId', environment.backups);
+       const newSnapshots = R.reject(snapshot => R.contains(snapshot.id, existingBackupIds), snapshots);

+       for (const snapshot of newSnapshots) {
        try {
          const newBackupResult = await saveSnapshotAsBackup(
            snapshot,
5 changes: 5 additions & 0 deletions yarn.lock
@@ -7746,6 +7746,11 @@ node-fetch@^2.1.2, node-fetch@^2.2.0:
resolved "https://registry.yarnpkg.com/node-fetch/-/node-fetch-2.3.0.tgz#1a1d940bbfb916a1d3e0219f037e89e71f8c5fa5"
integrity sha512-MOd8pV3fxENbryESLgVIeaGKrdl+uaYhCSSVkjeOb/31/njTpcis5aWfdqgNlHIrKOLRbMnfPINPOML2CIFeXA==

+ node-fetch@^2.6.0:
+   version "2.6.0"
+   resolved "https://registry.yarnpkg.com/node-fetch/-/node-fetch-2.6.0.tgz#e633456386d4aa55863f676a7ab0daa8fdecb0fd"
+   integrity sha512-8dG4H5ujfvFiqDmVu9fQ5bOHUC15JMjMY/Zumv26oOvvVJjM67KF8koCWIabKQ1GJIa9r2mMZscBq/TbdOcmNA==

node-int64@^0.4.0:
version "0.4.0"
resolved "https://registry.yarnpkg.com/node-int64/-/node-int64-0.4.0.tgz#87a9065cdb355d3182d8f94ce11188b825c68a3b"

0 comments on commit 2bac352
