Merge remote-tracking branch 'upstream/master' into relationalai-20231201
rbvermaa committed Dec 1, 2023
2 parents a3d3504 + 8f48e4d commit a8ff0ad
Showing 87 changed files with 2,389 additions and 1,032 deletions.
6 changes: 3 additions & 3 deletions .github/workflows/test.yml
@@ -4,11 +4,11 @@ on:
push:
jobs:
tests:
runs-on: ubuntu-18.04
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
with:
fetch-depth: 0
- uses: cachix/install-nix-action@v16
- uses: cachix/install-nix-action@v17
#- run: nix flake check
- run: nix-build -A checks.x86_64-linux.build -A checks.x86_64-linux.validate-openapi
2 changes: 0 additions & 2 deletions configure.ac
@@ -10,8 +10,6 @@ AC_PROG_LN_S
AC_PROG_LIBTOOL
AC_PROG_CXX

CXXFLAGS+=" -std=c++17"

AC_PATH_PROG([XSLTPROC], [xsltproc])

AC_ARG_WITH([docbook-xsl],
129 changes: 129 additions & 0 deletions doc/architecture.md
@@ -0,0 +1,129 @@
This is a rough overview, based on informal discussions and explanations, of the inner workings of Hydra.
You can use it as a guide to navigate the codebase or to ask questions.

## Architecture

### Components

- Postgres database
- configuration
- build queue
- what is already built
- what is going to be built
- `hydra-server`
- Perl, Catalyst
- web frontend
- `hydra-evaluator`
- Perl, C++
- fetches repositories
- evaluates job sets
- pointers to a repository
- adds builds to the queue
- `hydra-queue-runner`
- C++
- monitors the queue
- executes build steps
- uploads build results
- copy to a Nix store
- Nix store
- contains `.drv`s
- populated by `hydra-evaluator`
- read by `hydra-queue-runner`
- destination Nix store
- can be a binary cache
- e.g. [cache.nixos.org](http://cache.nixos.org) or the same store again (for small Hydra instances)
- plugin architecture
- extend evaluator for new kinds of repositories
- e.g. fetch from `git`
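
On a single machine, these components are typically wired together by the NixOS module. A minimal, hedged sketch follows: the `services.hydra` option names exist in the standard NixOS module, the values are illustrative, and with the module's defaults a local PostgreSQL database is provisioned as well.

```nix
{ ... }:
{
  services.hydra = {
    # Starts hydra-server, hydra-evaluator and hydra-queue-runner as
    # systemd services (and, by default, a local PostgreSQL database).
    enable = true;
    hydraURL = "https://hydra.example.org";    # illustrative URL
    notificationSender = "hydra@example.org";  # illustrative address
    useSubstitutes = true;
  };
}
```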

### Database Schema

[https://github.com/NixOS/hydra/blob/master/src/sql/hydra.sql](https://github.com/NixOS/hydra/blob/master/src/sql/hydra.sql)

- `Jobsets`
- populated by calling Nix evaluator
- every Nix derivation in `release.nix` is a Job
- `flake`
- URL to flake, if job is from a flake
- single-point of configuration for flake builds
- flake itself contains pointers to dependencies
- for other builds we need more configuration data
- `JobsetInputs`
- more configuration for a Job
- `JobsetInputAlts`
- historical, where you could have more than one alternative for each input
- it would have done the cross product of all possibilities
- not used any more, as now every input is unique
- originally that was to have alternative values for the system parameter
- `x86_64-linux`, `x86_64-darwin`
- turned out not to be a good idea, as job set names did not uniquely identify output
- `Builds`
- queue: scheduled and finished builds
- instance of a Job
- corresponds to a top-level derivation
- can have many dependencies that don’t have a corresponding build
- dependencies represented as `BuildSteps`
- a Job is all the builds with a particular name, e.g.
- `git.x86_64-linux` is a job
- there may be multiple builds for that job
- build ID: just an auto-increment number
- building one thing can actually cause many (hundreds of) derivations to be built
- for queued builds, the `drv` has to be present in the store
- otherwise build will fail, e.g. after garbage collection
- `BuildSteps`
- corresponds to a derivation or substitution
- are reused through the Nix store
- may be duplicated for unique derivations due to how they relate to `Jobs`
- `BuildStepOutputs`
- corresponds directly to derivation outputs
- `out`, `dev`, ...
- `BuildProducts`
- not a Nix concept
- populated from a special file `$out/nix-support/hydra-build-products`
- used to scrape parts of build results out to the web frontend
- e.g. manuals, ISO images, etc.
- `BuildMetrics`
- scrapes data from a magic location, similar to `BuildProducts`, to show fancy graphs
- e.g. test coverage, build times, CPU utilization for build
- `$out/nix-support/hydra-metrics`
- `BuildInputs`
- probably obsolete
- `JobsetEvalMembers`
- joins evaluations with jobs
- huge table, 10k’s of entries for one `nixpkgs` evaluation
- can be imagined as a subset of the eval cache
- could in principle use the eval cache

### `release.nix`

- hydra-specific convention to describe the build
- should evaluate to an attribute set that contains derivations
- hydra considers every attribute in that set a job
- every job needs a unique name
- if you want to build for multiple platforms, you need to reflect that in the name
- hydra does a deep traversal of the attribute set
- just evaluating the names may take half an hour
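
As a minimal sketch of this convention (package and attribute names are illustrative, not part of this commit; the `recurseForDerivations` marker mirrors the one used in the RunCommand example later in this diff):

```nix
{ nixpkgs ? <nixpkgs> }:
let
  pkgsFor = system: import nixpkgs { inherit system; };
in
{
  # Hydra traverses this attribute set; each attribute path such as
  # `hello.x86_64-linux` becomes one uniquely named job, with the
  # target platform encoded in the name.
  hello = {
    recurseForDerivations = true;  # allow Hydra to descend into the nested set
    x86_64-linux  = (pkgsFor "x86_64-linux").hello;
    aarch64-linux = (pkgsFor "aarch64-linux").hello;
  };
}
```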

## FAQ

Can we imagine Hydra to be a persistence layer for the build graph?

- partially, it lacks a lot of information
- does not keep edges of the build graph

How does Hydra relate to `nix build`?

- reimplements the top level Nix build loop, scheduling, etc.
- Hydra has to persist build results
- Hydra has more sophisticated remote build execution and scheduling than Nix

Is it conceptually possible to unify Hydra’s capabilities with regular Nix?

- Nix does not have any scheduling, it just traverses the build graph
- Hydra has scheduling in terms of job set priorities, tracks how much of a job set it has worked on
- makes sure jobs don’t starve each other
- Nix cannot dynamically add build jobs at runtime
- [RFC 92](https://github.com/NixOS/rfcs/blob/master/rfcs/0092-plan-dynamism.md) should enable that
- internally it is already possible, but there is no interface to do that
- Hydra queue runner is a long running process
- Nix takes a static set of jobs, working it off at once
1 change: 1 addition & 0 deletions doc/manual/src/SUMMARY.md
@@ -7,6 +7,7 @@
- [Hydra jobs](./jobs.md)
- [Plugins](./plugins/README.md)
- [Declarative Projects](./plugins/declarative-projects.md)
- [RunCommand](./plugins/RunCommand.md)
- [Using the external API](api.md)
- [Webhooks](webhooks.md)
- [Monitoring Hydra](./monitoring/README.md)
27 changes: 24 additions & 3 deletions doc/manual/src/configuration.md
@@ -102,6 +102,26 @@ in the hydra configuration file, as below:
</hydra_notify>
```

hydra-queue-runner's Prometheus service
---------------------------------------

hydra-queue-runner supports running a Prometheus webserver for metrics. The
exporter listens on `127.0.0.1:9198` by default, but the address is also
configurable through the Hydra configuration file or a command-line argument,
as below. A port of `:0` makes the exporter choose a random, available port.

```conf
queue_runner_metrics_address = 127.0.0.1:9198
# or
queue_runner_metrics_address = [::]:9198
```

```shell
$ hydra-queue-runner --prometheus-address 127.0.0.1:9198
# or
$ hydra-queue-runner --prometheus-address [::]:9198
```

Using LDAP as authentication backend (optional)
-----------------------------------------------

@@ -111,8 +131,8 @@ use LDAP to manage roles and users.
This is configured by defining the `<ldap>` block in the configuration file.
In this block it's possible to configure the authentication plugin in the
`<config>` block. All options are directly passed to `Catalyst::Authentication::Store::LDAP`.
The documentation for the available settings can be found [here]
(https://metacpan.org/pod/Catalyst::Authentication::Store::LDAP#CONFIGURATION-OPTIONS).
The documentation for the available settings can be found
[here](https://metacpan.org/pod/Catalyst::Authentication::Store::LDAP#CONFIGURATION-OPTIONS).

Note that the bind password (if needed) should be supplied as an included file to
prevent it from leaking to the Nix store.
@@ -159,13 +179,14 @@ Example configuration:
<role_search_options>
deref = always
</role_search_options>
</store>
</config>
<role_mapping>
# Make all users in the hydra_admin group Hydra admins
hydra_admin = admin
# Allow all users in the dev group to restart jobs and cancel builds
dev = restart-jobs
dev = cancel-builds
dev = cancel-build
</role_mapping>
</ldap>
```
Expand Down
2 changes: 1 addition & 1 deletion doc/manual/src/hacking.md
@@ -92,7 +92,7 @@ On NixOS:

```nix
{
nix.trustedUsers = [ "YOURUSER" ];
nix.settings.trusted-users = [ "YOURUSER" ];
}
```

15 changes: 3 additions & 12 deletions doc/manual/src/plugins/README.md
@@ -172,17 +172,6 @@ Sets Gitlab CI status.

- `gitlab_authorization.<projectId>`

## HipChat notification

Sends hipchat chat notifications when a build finish.

### Configuration options

- `hipchat.[].jobs`
- `hipchat.[].builds`
- `hipchat.[].token`
- `hipchat.[].notify`

## InfluxDB notification

Writes InfluxDB events when a build finishes.
@@ -192,10 +181,12 @@ Writes InfluxDB events when a build finishes.
- `influxdb.url`
- `influxdb.db`

## Run command
## RunCommand

Runs a shell command when the build is finished.

See [The RunCommand Plugin](./RunCommand.md) for more information.

### Configuration options:

- `runcommand.[].job`
83 changes: 83 additions & 0 deletions doc/manual/src/plugins/RunCommand.md
@@ -0,0 +1,83 @@
## The RunCommand Plugin

Hydra supports executing a program after certain builds finish.
This behavior is disabled by default.

Hydra executes these commands under the `hydra-notify` service.

### Static Commands

Configure specific commands to execute after the specified matching job finishes.

#### Configuration

- `runcommand.[].job`

A matcher for jobs to match in the format `project:jobset:job`. Defaults to `*:*:*`.

**Note:** This matcher format is not a regular expression.
The `*` is a wildcard for that entire section; partial matches are not supported.

- `runcommand.[].command`

Command to run. Can use the `$HYDRA_JSON` environment variable to access information about the build.

### Example

```xml
<runcommand>
job = myProject:*:*
command = cat $HYDRA_JSON > /tmp/hydra-output
</runcommand>
```

### Dynamic Commands

Hydra can optionally run RunCommand hooks defined dynamically by the jobset. In
order to enable dynamic commands, you must enable this feature in your
`hydra.conf`, *as well as* in the parent project and jobset configuration.

#### Behavior

Hydra will execute any program defined under the `runCommandHook` attribute set. These jobs must have a single output named `out`, and that output must be an executable file located directly at `$out`.

#### Security Properties

Safely deploying dynamic commands requires careful design of your Hydra jobs. Allowing arbitrary users to define attributes in your top-level attribute set will allow those users to execute code on your Hydra.

If a jobset has dynamic commands enabled, you must ensure only trusted users can define top level attributes.


#### Configuration

- `dynamicruncommand.enable`

Set to 1 to enable dynamic RunCommand program execution.

#### Example

In your Hydra configuration, specify:

```xml
<dynamicruncommand>
enable = 1
</dynamicruncommand>
```

Then create a job named `runCommandHook.example` in your jobset:

```
{ pkgs, ... }: {
runCommandHook = {
recurseForDerivations = true;
example = pkgs.writeScript "run-me" ''
#!${pkgs.runtimeShell}
${pkgs.jq}/bin/jq . "$HYDRA_JSON"
'';
};
}
```

After the `runCommandHook.example` build finishes, that script will execute.
3 changes: 3 additions & 0 deletions doc/manual/src/plugins/declarative-projects.md
@@ -34,6 +34,7 @@ To configure a static declarative project, take the following steps:
"checkinterval": 300,
"schedulingshares": 100,
"enableemail": false,
"enable_dynamic_run_command": false,
"emailoverride": "",
"keepnr": 3,
"inputs": {
@@ -53,6 +54,7 @@ To configure a static declarative project, take the following steps:
"checkinterval": 300,
"schedulingshares": 100,
"enableemail": false,
"enable_dynamic_run_command": false,
"emailoverride": "",
"keepnr": 3,
"inputs": {
@@ -92,6 +94,7 @@ containing the configuration of the jobset, for example:
"checkinterval": 300,
"schedulingshares": 100,
"enableemail": false,
"enable_dynamic_run_command": false,
"emailoverride": "",
"keepnr": 3,
"inputs": {
13 changes: 9 additions & 4 deletions doc/manual/src/projects.md
@@ -378,13 +378,18 @@ This section describes how it can be implemented for `gitea`, but the approach f
analogous:

* [Obtain an API token for your user](https://docs.gitea.io/en-us/api-usage/#authentication)
* Add it to your `hydra.conf` like this:
* Add it to a file that only users in the hydra group can read, like this (see [including files](configuration.md#including-files) for more information):
```
<gitea_authorization>
your_username=your_token
</gitea_authorization>
```

* Include the file in your `hydra.conf` like this:
``` nix
{
services.hydra-dev.extraConfig = ''
<gitea_authorization>
your_username=your_token
</gitea_authorization>
Include /path/to/secret/file
'';
}
```